Test Report: QEMU_macOS 16578

d4c33ff371b38c9e245a0eee82030d8958ba8577:2023-06-10:29644

Failed tests (86/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 24.57
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.83
24 TestAddons/parallel/Registry 721.04
25 TestAddons/parallel/Ingress 0.79
26 TestAddons/parallel/InspektorGadget 480.9
27 TestAddons/parallel/MetricsServer 720.87
30 TestAddons/parallel/CSI 387.31
37 TestCertOptions 10.04
38 TestCertExpiration 195.24
39 TestDockerFlags 10.19
40 TestForceSystemdFlag 11.15
41 TestForceSystemdEnv 10.04
84 TestFunctional/parallel/ServiceCmdConnect 33.37
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
151 TestImageBuild/serial/BuildWithBuildArg 1.02
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 54.96
192 TestMinikubeProfile 21.89
200 TestMountStart/serial/VerifyMountPostDelete 101.04
209 TestMultiNode/serial/StopNode 378.32
210 TestMultiNode/serial/StartAfterStop 230.27
211 TestMultiNode/serial/RestartKeepsNodes 41.54
212 TestMultiNode/serial/DeleteNode 0.1
213 TestMultiNode/serial/StopMultiNode 0.17
214 TestMultiNode/serial/RestartMultiNode 5.25
215 TestMultiNode/serial/ValidateNameConflict 10.2
219 TestPreload 10.22
221 TestScheduledStopUnix 10.05
222 TestSkaffold 16.16
225 TestRunningBinaryUpgrade 126.2
227 TestKubernetesUpgrade 15.21
240 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.55
241 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.3
249 TestStoppedBinaryUpgrade/Setup 145.26
250 TestStoppedBinaryUpgrade/Upgrade 2.8
251 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
254 TestNoKubernetes/serial/StartWithK8s 9.73
255 TestNoKubernetes/serial/StartWithStopK8s 5.47
256 TestNoKubernetes/serial/Start 5.47
260 TestNoKubernetes/serial/StartNoArgs 5.46
263 TestPause/serial/Start 9.79
264 TestNetworkPlugins/group/auto/Start 9.72
265 TestNetworkPlugins/group/kindnet/Start 9.74
266 TestNetworkPlugins/group/calico/Start 9.72
267 TestNetworkPlugins/group/custom-flannel/Start 9.84
268 TestNetworkPlugins/group/false/Start 9.79
269 TestNetworkPlugins/group/enable-default-cni/Start 9.66
270 TestNetworkPlugins/group/flannel/Start 9.72
271 TestNetworkPlugins/group/bridge/Start 10.7
272 TestNetworkPlugins/group/kubenet/Start 9.74
274 TestStartStop/group/old-k8s-version/serial/FirstStart 9.97
276 TestStartStop/group/no-preload/serial/FirstStart 9.87
277 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
278 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
281 TestStartStop/group/old-k8s-version/serial/SecondStart 6.94
282 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
283 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
284 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
285 TestStartStop/group/old-k8s-version/serial/Pause 0.1
287 TestStartStop/group/embed-certs/serial/FirstStart 11.4
288 TestStartStop/group/no-preload/serial/DeployApp 0.1
289 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/no-preload/serial/SecondStart 7.07
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/no-preload/serial/Pause 0.1
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.03
299 TestStartStop/group/embed-certs/serial/DeployApp 0.1
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
303 TestStartStop/group/embed-certs/serial/SecondStart 6.99
304 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
305 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
306 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
307 TestStartStop/group/embed-certs/serial/Pause 0.1
309 TestStartStop/group/newest-cni/serial/FirstStart 11.25
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.97
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
318 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
323 TestStartStop/group/newest-cni/serial/SecondStart 5.24
326 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
327 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (24.57s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-879000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-879000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (24.568143375s)

-- stdout --
	{"specversion":"1.0","id":"8a031b57-1164-4b82-823a-dfa5163dc748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-879000] minikube v1.30.1 on Darwin 13.4 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"68e0e07b-a167-4fc4-8996-d45ea779461b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16578"}}
	{"specversion":"1.0","id":"cc9b0ca2-6310-49fd-97fa-5fc1ebdb0b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig"}}
	{"specversion":"1.0","id":"ad47b271-4228-4dcb-a540-e1dd360edf9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b07abcbf-e4ab-4b63-9178-3a25b201334b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b7e43df3-44b4-4e0c-97a7-06c7baac66dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube"}}
	{"specversion":"1.0","id":"227b9e16-9c21-440e-8c2a-6389d567c6ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"83bb0a44-a144-4787-b219-98ab8c0dad2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"22518c04-2ee3-43f0-9878-e9d9786bcfa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"acb89406-3959-46f3-a5de-9d11b51efc21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"88d5857a-a024-4992-b275-10f660146ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-879000 in cluster download-only-879000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd79a2e3-2b31-4e27-a514-b4b2050092e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb6700cd-006c-40f3-902c-cfb2a1a1f2f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28] Decompressors:map[bz2:0x14000526928 gz:0x14000526980 tar:0x14000526930 tar.bz2:0x14000526940 tar.gz:0x14000526950 tar.xz:0x14000526960 tar.zst:0x14000526970 tbz2:0x14000526940 tgz:0x140005
26950 txz:0x14000526960 tzst:0x14000526970 xz:0x14000526988 zip:0x14000526990 zst:0x140005269a0] Getters:map[file:0x140010a65a0 http:0x14000acc140 https:0x14000acc190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f3676e7c-35ff-480e-96ee-1392a4cae746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0610 09:21:09.082342    1566 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:09.082479    1566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:09.082482    1566 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:09.082484    1566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:09.082556    1566 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	W0610 09:21:09.082614    1566 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16578-1150/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16578-1150/.minikube/config/config.json: no such file or directory
	I0610 09:21:09.083773    1566 out.go:303] Setting JSON to true
	I0610 09:21:09.100627    1566 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1240,"bootTime":1686412829,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:09.100688    1566 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:09.105725    1566 out.go:97] [download-only-879000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:09.108740    1566 out.go:169] MINIKUBE_LOCATION=16578
	I0610 09:21:09.105888    1566 notify.go:220] Checking for updates...
	W0610 09:21:09.105902    1566 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 09:21:09.113627    1566 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:09.116758    1566 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:09.119697    1566 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:09.122717    1566 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	W0610 09:21:09.127014    1566 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 09:21:09.127237    1566 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:09.132703    1566 out.go:97] Using the qemu2 driver based on user configuration
	I0610 09:21:09.132723    1566 start.go:297] selected driver: qemu2
	I0610 09:21:09.132727    1566 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:09.132797    1566 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:09.136697    1566 out.go:169] Automatically selected the socket_vmnet network
	I0610 09:21:09.142009    1566 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 09:21:09.142085    1566 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 09:21:09.142120    1566 cni.go:84] Creating CNI manager for ""
	I0610 09:21:09.142136    1566 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 09:21:09.142140    1566 start_flags.go:319] config:
	{Name:download-only-879000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-879000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:09.142297    1566 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:09.146699    1566 out.go:97] Downloading VM boot image ...
	I0610 09:21:09.146716    1566 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso
	I0610 09:21:19.325617    1566 out.go:97] Starting control plane node download-only-879000 in cluster download-only-879000
	I0610 09:21:19.325645    1566 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:19.426955    1566 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 09:21:19.427026    1566 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:19.427218    1566 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:19.432366    1566 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 09:21:19.432375    1566 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:19.661311    1566 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 09:21:32.022545    1566 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:32.022682    1566 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:32.673657    1566 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 09:21:32.673847    1566 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/download-only-879000/config.json ...
	I0610 09:21:32.673866    1566 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/download-only-879000/config.json: {Name:mk8ea572823972a0ca150d4787089a831e408f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:32.674099    1566 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:32.674285    1566 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0610 09:21:33.582780    1566 out.go:169] 
	W0610 09:21:33.586665    1566 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28] Decompressors:map[bz2:0x14000526928 gz:0x14000526980 tar:0x14000526930 tar.bz2:0x14000526940 tar.gz:0x14000526950 tar.xz:0x14000526960 tar.zst:0x14000526970 tbz2:0x14000526940 tgz:0x14000526950 txz:0x14000526960 tzst:0x14000526970 xz:0x14000526988 zip:0x14000526990 zst:0x140005269a0] Getters:map[file:0x140010a65a0 http:0x14000acc140 https:0x14000acc190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 09:21:33.586692    1566 out_reason.go:110] 
	W0610 09:21:33.593774    1566 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 09:21:33.597773    1566 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-879000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (24.57s)
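
The 404 above is on the kubectl checksum file for darwin/arm64: Kubernetes v1.16.0 predates darwin/arm64 release binaries, so neither the kubectl binary nor its .sha1 file exists at dl.k8s.io for that version, and any arm64 Mac runner will hit this. A quick manual check of the URL taken from the failure (not part of the test suite) should reproduce the same status code:

	# Follow redirects and print only the final HTTP status for the checksum URL from the log;
	# this should print the same 404 that minikube's getter reported.
	curl -sL -o /dev/null -w '%{http_code}\n' \
	  'https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1'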

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
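
This is a knock-on failure from the download above: the test only stats the cache path that the failed download was supposed to populate. Listing the cache directory on the agent (path copied from the log) would show the entry was never created:

	# The v1.16.0 darwin/arm64 cache directory the test checks; missing because the
	# kubectl download ended with the 404 shown in the previous test.
	ls -l /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/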

TestOffline (9.83s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-407000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-407000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.66677s)

-- stdout --
	* [offline-docker-407000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-407000 in cluster offline-docker-407000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 10:13:32.278922    4039 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:13:32.279053    4039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:32.279056    4039 out.go:309] Setting ErrFile to fd 2...
	I0610 10:13:32.279059    4039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:32.279125    4039 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:13:32.280081    4039 out.go:303] Setting JSON to false
	I0610 10:13:32.296160    4039 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4383,"bootTime":1686412829,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:13:32.296242    4039 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:13:32.300462    4039 out.go:177] * [offline-docker-407000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:13:32.308366    4039 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:13:32.312308    4039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:13:32.308428    4039 notify.go:220] Checking for updates...
	I0610 10:13:32.318599    4039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:13:32.321273    4039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:13:32.324393    4039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:13:32.327319    4039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:13:32.330443    4039 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:13:32.334336    4039 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:13:32.340305    4039 start.go:297] selected driver: qemu2
	I0610 10:13:32.340310    4039 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:13:32.340317    4039 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:13:32.342198    4039 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:13:32.345350    4039 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:13:32.348402    4039 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:13:32.348419    4039 cni.go:84] Creating CNI manager for ""
	I0610 10:13:32.348424    4039 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:13:32.348429    4039 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:13:32.348434    4039 start_flags.go:319] config:
	{Name:offline-docker-407000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-407000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:13:32.348533    4039 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:13:32.352316    4039 out.go:177] * Starting control plane node offline-docker-407000 in cluster offline-docker-407000
	I0610 10:13:32.360384    4039 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:13:32.360431    4039 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:13:32.360440    4039 cache.go:57] Caching tarball of preloaded images
	I0610 10:13:32.360522    4039 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:13:32.360528    4039 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:13:32.361617    4039 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/offline-docker-407000/config.json ...
	I0610 10:13:32.361652    4039 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/offline-docker-407000/config.json: {Name:mk64617618376e3a967c969d1e1e7651c134ae89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:13:32.361885    4039 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:13:32.361899    4039 start.go:364] acquiring machines lock for offline-docker-407000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:32.361925    4039 start.go:368] acquired machines lock for "offline-docker-407000" in 22µs
	I0610 10:13:32.361935    4039 start.go:93] Provisioning new machine with config: &{Name:offline-docker-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-407000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:32.361969    4039 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:32.366339    4039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:32.380687    4039 start.go:159] libmachine.API.Create for "offline-docker-407000" (driver="qemu2")
	I0610 10:13:32.380717    4039 client.go:168] LocalClient.Create starting
	I0610 10:13:32.380779    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:32.380800    4039 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:32.380812    4039 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:32.380867    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:32.380881    4039 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:32.380888    4039 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:32.381213    4039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:32.497769    4039 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:32.613734    4039 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:32.613744    4039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:32.613919    4039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2
	I0610 10:13:32.623161    4039 main.go:141] libmachine: STDOUT: 
	I0610 10:13:32.623184    4039 main.go:141] libmachine: STDERR: 
	I0610 10:13:32.623261    4039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2 +20000M
	I0610 10:13:32.631348    4039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:32.631369    4039 main.go:141] libmachine: STDERR: 
	I0610 10:13:32.631389    4039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2
	I0610 10:13:32.631398    4039 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:32.631433    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b7:98:cb:ef:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2
	I0610 10:13:32.633163    4039 main.go:141] libmachine: STDOUT: 
	I0610 10:13:32.633181    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:32.633205    4039 client.go:171] LocalClient.Create took 252.486333ms
	I0610 10:13:34.635284    4039 start.go:128] duration metric: createHost completed in 2.273331917s
	I0610 10:13:34.635304    4039 start.go:83] releasing machines lock for "offline-docker-407000", held for 2.2734095s
	W0610 10:13:34.635314    4039 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:34.642054    4039 out.go:177] * Deleting "offline-docker-407000" in qemu2 ...
	W0610 10:13:34.649293    4039 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:34.649301    4039 start.go:702] Will try again in 5 seconds ...
	I0610 10:13:39.651434    4039 start.go:364] acquiring machines lock for offline-docker-407000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:39.651845    4039 start.go:368] acquired machines lock for "offline-docker-407000" in 306.084µs
	I0610 10:13:39.651930    4039 start.go:93] Provisioning new machine with config: &{Name:offline-docker-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-407000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:39.652279    4039 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:39.657932    4039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:39.705658    4039 start.go:159] libmachine.API.Create for "offline-docker-407000" (driver="qemu2")
	I0610 10:13:39.705713    4039 client.go:168] LocalClient.Create starting
	I0610 10:13:39.705867    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:39.705922    4039 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:39.705943    4039 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:39.706042    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:39.706075    4039 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:39.706088    4039 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:39.706674    4039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:39.831949    4039 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:39.860084    4039 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:39.860090    4039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:39.860245    4039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2
	I0610 10:13:39.868622    4039 main.go:141] libmachine: STDOUT: 
	I0610 10:13:39.868638    4039 main.go:141] libmachine: STDERR: 
	I0610 10:13:39.868699    4039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2 +20000M
	I0610 10:13:39.875881    4039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:39.875895    4039 main.go:141] libmachine: STDERR: 
	I0610 10:13:39.875911    4039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2
	I0610 10:13:39.875917    4039 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:39.875953    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ab:47:f0:68:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/offline-docker-407000/disk.qcow2
	I0610 10:13:39.877449    4039 main.go:141] libmachine: STDOUT: 
	I0610 10:13:39.877463    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:39.877474    4039 client.go:171] LocalClient.Create took 171.75475ms
	I0610 10:13:41.879662    4039 start.go:128] duration metric: createHost completed in 2.227389625s
	I0610 10:13:41.879713    4039 start.go:83] releasing machines lock for "offline-docker-407000", held for 2.227879292s
	W0610 10:13:41.880007    4039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:41.889581    4039 out.go:177] 
	W0610 10:13:41.893742    4039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:13:41.893789    4039 out.go:239] * 
	* 
	W0610 10:13:41.896257    4039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:13:41.905523    4039 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-407000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-06-10 10:13:41.925487 -0700 PDT m=+3152.992043959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-407000 -n offline-docker-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-407000 -n offline-docker-407000: exit status 7 (59.911292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-407000
--- FAIL: TestOffline (9.83s)
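
Note that the failure here is not the offline scenario itself: every qemu2 VM start in this log dies with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon was not listening on the agent, so the driver never gets a network file descriptor. A minimal host-side sanity check (assuming the default paths shown in the log) before re-running would be:

	# Does the unix socket the qemu2 driver dials actually exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet is not running"

Starting the daemon (per the socket_vmnet project's install instructions) before the run should clear this; the same "Connection refused" would be expected to knock out any other test in this run that has to boot a qemu2 VM, which is consistent with the many ~10 second Start failures in the table above.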

TestAddons/parallel/Registry (721.04s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001575542s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
addons_test.go:308: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-06-10 09:40:39.919921 -0700 PDT m=+1170.911728209
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-098000 -n addons-098000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-098000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | --download-only -p             | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | binary-mirror-025000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000        | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | -p addons-098000               | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:28 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:28 PDT | 10 Jun 23 09:28 PDT |
	|         | addons-098000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
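	Read against the first entry below, those header fields decode as:

	    I      0610   09:21:54.764352   1637       out.go:296   Setting OutFile to fd 1 ...
	    level  mmdd   hh:mm:ss.uuuuuu   threadid   file:line    msg        (I=info, W=warning, E=error, F=fatal)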
	I0610 09:21:54.764352    1637 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:54.764757    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764761    1637 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:54.764764    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764861    1637 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:21:54.766294    1637 out.go:303] Setting JSON to false
	I0610 09:21:54.781540    1637 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1285,"bootTime":1686412829,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:54.781615    1637 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:54.786460    1637 out.go:177] * [addons-098000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:54.793542    1637 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:21:54.798440    1637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:54.793561    1637 notify.go:220] Checking for updates...
	I0610 09:21:54.804413    1637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:54.807450    1637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:54.810460    1637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:21:54.811765    1637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:21:54.814627    1637 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:54.818412    1637 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:21:54.823426    1637 start.go:297] selected driver: qemu2
	I0610 09:21:54.823432    1637 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:54.823441    1637 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:21:54.825256    1637 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:54.828578    1637 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:21:54.831535    1637 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:21:54.831554    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:21:54.831575    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:54.831579    1637 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:21:54.831586    1637 start_flags.go:319] config:
	{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:54.831700    1637 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:54.840445    1637 out.go:177] * Starting control plane node addons-098000 in cluster addons-098000
	I0610 09:21:54.844425    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:54.844451    1637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:21:54.844469    1637 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:54.844530    1637 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 09:21:54.844535    1637 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:21:54.844735    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:21:54.844750    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json: {Name:mkfbe060a3258f68fbe8b01ce26e4a7ada2f24f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:54.844947    1637 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:21:54.844969    1637 start.go:364] acquiring machines lock for addons-098000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:21:54.845063    1637 start.go:368] acquired machines lock for "addons-098000" in 89.292µs
	I0610 09:21:54.845075    1637 start.go:93] Provisioning new machine with config: &{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:21:54.845115    1637 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:21:54.853376    1637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 09:21:55.217388    1637 start.go:159] libmachine.API.Create for "addons-098000" (driver="qemu2")
	I0610 09:21:55.217427    1637 client.go:168] LocalClient.Create starting
	I0610 09:21:55.217549    1637 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:21:55.301145    1637 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:21:55.414002    1637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:21:55.826273    1637 main.go:141] libmachine: Creating SSH key...
	I0610 09:21:55.859428    1637 main.go:141] libmachine: Creating Disk image...
	I0610 09:21:55.859434    1637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:21:55.859612    1637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.941560    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:55.941581    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.941655    1637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2 +20000M
	I0610 09:21:55.948999    1637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:21:55.949013    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.949042    1637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
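	The disk-image preparation above reduces to two qemu-img calls; a minimal sketch, with MACHINE_DIR standing in for the .minikube/machines/addons-098000 directory used in this run:

	    MACHINE_DIR=/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000
	    # convert the raw seed image to qcow2, then grow it to the requested 20000 MB
	    qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	    qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M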
	I0610 09:21:55.949049    1637 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:21:55.949080    1637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:e2:60:7a:4e:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:56.034280    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:56.034334    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:56.034338    1637 main.go:141] libmachine: Attempt 0
	I0610 09:21:56.034355    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:21:58.036587    1637 main.go:141] libmachine: Attempt 1
	I0610 09:21:58.036664    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:00.038868    1637 main.go:141] libmachine: Attempt 2
	I0610 09:22:00.038909    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:02.040980    1637 main.go:141] libmachine: Attempt 3
	I0610 09:22:02.040996    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:04.043076    1637 main.go:141] libmachine: Attempt 4
	I0610 09:22:04.043113    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:06.045175    1637 main.go:141] libmachine: Attempt 5
	I0610 09:22:06.045200    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047388    1637 main.go:141] libmachine: Attempt 6
	I0610 09:22:08.047472    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047875    1637 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0610 09:22:08.047987    1637 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6485f4af}
	I0610 09:22:08.048012    1637 main.go:141] libmachine: Found match: c2:e2:60:7a:4e:46
	I0610 09:22:08.048053    1637 main.go:141] libmachine: IP: 192.168.105.2
	I0610 09:22:08.048083    1637 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
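	The "Searching for ... in /var/db/dhcpd_leases" attempts above poll macOS's DHCP lease file until the VM's MAC address appears; a rough manual equivalent (the exact lease-file layout can vary between macOS versions):

	    # the ip_address= line normally precedes the matching hw_address= entry
	    grep -B 2 'c2:e2:60:7a:4e:46' /var/db/dhcpd_leases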
	I0610 09:22:10.069705    1637 machine.go:88] provisioning docker machine ...
	I0610 09:22:10.069788    1637 buildroot.go:166] provisioning hostname "addons-098000"
	I0610 09:22:10.070644    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.071570    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.071588    1637 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-098000 && echo "addons-098000" | sudo tee /etc/hostname
	I0610 09:22:10.164038    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-098000
	
	I0610 09:22:10.164160    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.164626    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.164641    1637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:22:10.239261    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:22:10.239281    1637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:22:10.239300    1637 buildroot.go:174] setting up certificates
	I0610 09:22:10.239307    1637 provision.go:83] configureAuth start
	I0610 09:22:10.239314    1637 provision.go:138] copyHostCerts
	I0610 09:22:10.239507    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:22:10.240632    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:22:10.241010    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:22:10.241260    1637 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.addons-098000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-098000]
	I0610 09:22:10.307069    1637 provision.go:172] copyRemoteCerts
	I0610 09:22:10.307140    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:22:10.307172    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.339991    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:22:10.346931    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 09:22:10.353742    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:22:10.360626    1637 provision.go:86] duration metric: configureAuth took 121.313416ms
	I0610 09:22:10.360639    1637 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:22:10.361002    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:10.361055    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.361272    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.361276    1637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:22:10.420194    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:22:10.420201    1637 buildroot.go:70] root file system type: tmpfs
	I0610 09:22:10.420251    1637 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:22:10.420295    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.420542    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.420577    1637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:22:10.485025    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:22:10.485070    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.485298    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.485310    1637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:22:10.830569    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:22:10.830580    1637 machine.go:91] provisioned docker machine in 760.843209ms
	I0610 09:22:10.830585    1637 client.go:171] LocalClient.Create took 15.613176541s
	I0610 09:22:10.830594    1637 start.go:167] duration metric: libmachine.API.Create for "addons-098000" took 15.613236583s
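	The docker.service rollout above follows a compare-then-swap pattern: the candidate unit is written to *.new, and only if it differs from the installed unit (or no unit exists yet) is it moved into place and the daemon reloaded, enabled, and restarted. A generic sketch of the same pattern (example.service is a placeholder name):

	    UNIT=/lib/systemd/system/example.service
	    sudo diff -u "$UNIT" "$UNIT.new" || {
	      sudo mv "$UNIT.new" "$UNIT"
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable example.service && sudo systemctl -f restart example.service
	    }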
	I0610 09:22:10.830598    1637 start.go:300] post-start starting for "addons-098000" (driver="qemu2")
	I0610 09:22:10.830601    1637 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:22:10.830682    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:22:10.830692    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.862119    1637 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:22:10.863469    1637 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:22:10.863478    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:22:10.863540    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:22:10.863565    1637 start.go:303] post-start completed in 32.963459ms
	I0610 09:22:10.863901    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:22:10.864045    1637 start.go:128] duration metric: createHost completed in 16.018950083s
	I0610 09:22:10.864069    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.864287    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.864291    1637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:22:10.923434    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686414131.384712585
	
	I0610 09:22:10.923441    1637 fix.go:207] guest clock: 1686414131.384712585
	I0610 09:22:10.923446    1637 fix.go:220] Guest: 2023-06-10 09:22:11.384712585 -0700 PDT Remote: 2023-06-10 09:22:10.864048 -0700 PDT m=+16.118188126 (delta=520.664585ms)
	I0610 09:22:10.923456    1637 fix.go:191] guest clock delta is within tolerance: 520.664585ms
	I0610 09:22:10.923459    1637 start.go:83] releasing machines lock for "addons-098000", held for 16.0784145s
	I0610 09:22:10.923756    1637 ssh_runner.go:195] Run: cat /version.json
	I0610 09:22:10.923765    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.923833    1637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:22:10.923872    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:11.040251    1637 ssh_runner.go:195] Run: systemctl --version
	I0610 09:22:11.042905    1637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:22:11.045415    1637 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:22:11.045461    1637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:22:11.051643    1637 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:22:11.051653    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:11.051736    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:11.061365    1637 docker.go:633] Got preloaded images: 
	I0610 09:22:11.061374    1637 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:22:11.061418    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:11.064624    1637 ssh_runner.go:195] Run: which lz4
	I0610 09:22:11.066056    1637 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 09:22:11.067511    1637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:22:11.067524    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 09:22:12.384653    1637 docker.go:597] Took 1.318649 seconds to copy over tarball
	I0610 09:22:12.384711    1637 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:22:13.518722    1637 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.133975834s)
	I0610 09:22:13.518746    1637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:22:13.534141    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:13.537423    1637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:22:13.542380    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:13.617910    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:15.783768    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165840375s)
	I0610 09:22:15.783797    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.783942    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.789136    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:22:15.792061    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:22:15.794990    1637 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:22:15.795014    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:22:15.798511    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.801745    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:22:15.804884    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.807635    1637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:22:15.810661    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:22:15.814158    1637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:22:15.817306    1637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:22:15.819948    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:15.905204    1637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:22:15.910905    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.910988    1637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:22:15.916986    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.922219    1637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:22:15.929205    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.933866    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.938677    1637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:22:15.974269    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.979243    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.984512    1637 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:22:15.985792    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:22:15.988369    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:22:15.993006    1637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:22:16.073036    1637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:22:16.147707    1637 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:22:16.147726    1637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:22:16.152764    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:16.219604    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:17.389947    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170326875s)
	I0610 09:22:17.390012    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.468450    1637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:22:17.548751    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.629562    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.707590    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:22:17.714930    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.794794    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:22:17.819341    1637 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:22:17.819427    1637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:22:17.821557    1637 start.go:549] Will wait 60s for crictl version
	I0610 09:22:17.821591    1637 ssh_runner.go:195] Run: which crictl
	I0610 09:22:17.825207    1637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:22:17.842430    1637 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:22:17.842501    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.850299    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.866701    1637 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:22:17.866866    1637 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:22:17.868327    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.871885    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:17.871927    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.877489    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.877499    1637 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:22:17.877550    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.883143    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.883157    1637 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:22:17.883198    1637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:22:17.890410    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:17.890420    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:17.890445    1637 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:22:17.890455    1637 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-098000 NodeName:addons-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:22:17.890526    1637 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-098000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
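	The kubeadm configuration above is written out later in this log as /var/tmp/minikube/kubeadm.yaml and consumed by kubeadm init; a config like this can also be sanity-checked by hand with a dry run (illustrative only, not part of this test run):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run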
	
	I0610 09:22:17.890573    1637 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:22:17.890631    1637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:22:17.893850    1637 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:22:17.893880    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:22:17.896724    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0610 09:22:17.901642    1637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:22:17.906483    1637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0610 09:22:17.911373    1637 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0610 09:22:17.912694    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.916067    1637 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000 for IP: 192.168.105.2
	I0610 09:22:17.916076    1637 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:17.916236    1637 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:22:18.022564    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt ...
	I0610 09:22:18.022569    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt: {Name:mk821d9de36f93438ad430683cb25e2f1c33c9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022803    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key ...
	I0610 09:22:18.022806    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key: {Name:mk750eea32c0b02b6ad84d81711cbfd77ceefe90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022913    1637 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:22:18.159699    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt ...
	I0610 09:22:18.159708    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt: {Name:mk10e39bee2c5c6785228bc7733548a740243d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.159914    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key ...
	I0610 09:22:18.159917    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key: {Name:mk04d776031cd8d2755a757ba7736e35a9c25212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.160037    1637 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key
	I0610 09:22:18.160044    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt with IP's: []
	I0610 09:22:18.246526    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt ...
	I0610 09:22:18.246530    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: {Name:mk301aca75dad20ac385eb683aae1662edff3d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246697    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key ...
	I0610 09:22:18.246700    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key: {Name:mkdf4a2bc618a029a53fbd786e41dffe68b8316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246803    1637 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969
	I0610 09:22:18.246812    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:22:18.411436    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 ...
	I0610 09:22:18.411440    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969: {Name:mk922ab871b245e2b8e7e4b2a109a553fe1bcc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411596    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 ...
	I0610 09:22:18.411599    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969: {Name:mkdde2defc189629d0924fe6871b2adb52e47c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411697    1637 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt
	I0610 09:22:18.411933    1637 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key
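	The apiserver serving certificate assembled just above is issued for the IPs listed at its generation step (192.168.105.2, 10.96.0.1, 127.0.0.1, 10.0.0.1); a hypothetical way to confirm that from the host, using the profile path from this run:

	    openssl x509 -noout -text \
	      -in /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt \
	      | grep -A 1 'Subject Alternative Name'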
	I0610 09:22:18.412033    1637 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key
	I0610 09:22:18.412047    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt with IP's: []
	I0610 09:22:18.578568    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt ...
	I0610 09:22:18.578583    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt: {Name:mkb4544f3ff14d84a98fd9ec92bfcdbb5d50e84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.578783    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key ...
	I0610 09:22:18.578786    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key: {Name:mk82ce3998197ea814bf8f591a5b4b56c617f405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.579030    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:22:18.579468    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:22:18.579491    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:22:18.579672    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:22:18.580285    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:22:18.587660    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:22:18.594728    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:22:18.602219    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 09:22:18.609690    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:22:18.617442    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:22:18.624297    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:22:18.631049    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:22:18.638070    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:22:18.644969    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:22:18.650094    1637 ssh_runner.go:195] Run: openssl version
	I0610 09:22:18.652167    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:22:18.655090    1637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656540    1637 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656561    1637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.658363    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
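	The two openssl/ln steps above implement the usual OpenSSL subject-hash lookup convention: a certificate in /etc/ssl/certs is found via a symlink named <subject-hash>.0. A compact sketch of the same thing (the b5213941 hash comes from the log line above):

	    # prints the subject hash, e.g. b5213941 for this CA
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"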
	I0610 09:22:18.661572    1637 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:22:18.662872    1637 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:22:18.662908    1637 kubeadm.go:404] StartCluster: {Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:22:18.662975    1637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:22:18.668496    1637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:22:18.671389    1637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:22:18.674606    1637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:22:18.677626    1637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:22:18.677644    1637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:22:18.703158    1637 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:22:18.703188    1637 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:22:18.757797    1637 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:22:18.757860    1637 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:22:18.757910    1637 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:22:18.816123    1637 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:22:18.821365    1637 out.go:204]   - Generating certificates and keys ...
	I0610 09:22:18.821409    1637 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:22:18.821441    1637 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:22:19.085233    1637 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:22:19.181413    1637 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:22:19.330348    1637 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:22:19.412707    1637 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:22:19.604000    1637 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:22:19.604069    1637 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.814398    1637 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:22:19.814478    1637 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.907005    1637 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:22:20.056367    1637 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:22:20.125295    1637 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:22:20.125333    1637 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:22:20.241297    1637 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:22:20.330399    1637 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:22:20.489216    1637 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:22:20.764229    1637 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:22:20.771051    1637 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:22:20.771103    1637 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:22:20.771135    1637 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:22:20.859965    1637 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:22:20.864105    1637 out.go:204]   - Booting up control plane ...
	I0610 09:22:20.864178    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:22:20.864224    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:22:20.864257    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:22:20.864302    1637 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:22:20.865267    1637 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:22:24.366796    1637 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501337 seconds
	I0610 09:22:24.366861    1637 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:22:24.372204    1637 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:22:24.898455    1637 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:22:24.898779    1637 kubeadm.go:322] [mark-control-plane] Marking the node addons-098000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:22:25.404043    1637 kubeadm.go:322] [bootstrap-token] Using token: 8xmw5d.kvohdu7dlcpn05ob
	I0610 09:22:25.410608    1637 out.go:204]   - Configuring RBAC rules ...
	I0610 09:22:25.410669    1637 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:22:25.411737    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:22:25.418545    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:22:25.419904    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:22:25.421252    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:22:25.422283    1637 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:22:25.427205    1637 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:22:25.603958    1637 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:22:25.815834    1637 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:22:25.816185    1637 kubeadm.go:322] 
	I0610 09:22:25.816225    1637 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:22:25.816233    1637 kubeadm.go:322] 
	I0610 09:22:25.816291    1637 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:22:25.816295    1637 kubeadm.go:322] 
	I0610 09:22:25.816308    1637 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:22:25.816346    1637 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:22:25.816388    1637 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:22:25.816392    1637 kubeadm.go:322] 
	I0610 09:22:25.816425    1637 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:22:25.816430    1637 kubeadm.go:322] 
	I0610 09:22:25.816463    1637 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:22:25.816466    1637 kubeadm.go:322] 
	I0610 09:22:25.816508    1637 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:22:25.816560    1637 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:22:25.816602    1637 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:22:25.816605    1637 kubeadm.go:322] 
	I0610 09:22:25.816653    1637 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:22:25.816694    1637 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:22:25.816699    1637 kubeadm.go:322] 
	I0610 09:22:25.816749    1637 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.816801    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:22:25.816815    1637 kubeadm.go:322] 	--control-plane 
	I0610 09:22:25.816823    1637 kubeadm.go:322] 
	I0610 09:22:25.816880    1637 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:22:25.816883    1637 kubeadm.go:322] 
	I0610 09:22:25.816931    1637 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.817003    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:22:25.817072    1637 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:22:25.817175    1637 kubeadm.go:322] W0610 16:22:19.219117    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817283    1637 kubeadm.go:322] W0610 16:22:21.323610    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
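For reference, a minimal way to spot-check an init like the one above from inside the guest, using the kubectl binary path that appears later in this log (the admin.conf path comes from the kubeadm output above; the commands themselves are illustrative, not part of the captured run):

	# confirm the node registered and the control-plane pods are coming up
	sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
	sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system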
	I0610 09:22:25.817294    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:25.817303    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:25.823848    1637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:22:25.826928    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:22:25.830443    1637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
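The scp above writes the bridge CNI config into the guest; a quick illustrative check of what landed on disk (paths taken from the log line above, the commands are assumed and were not part of the run):

	# list the CNI configs and show the 457-byte conflist minikube wrote
	sudo ls -la /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist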
	I0610 09:22:25.836316    1637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:22:25.836378    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.836393    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=addons-098000 minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.900338    1637 ops.go:34] apiserver oom_adj: -16
	I0610 09:22:25.900382    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.433306    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.933284    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.433115    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.933305    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.433535    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.933493    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.433524    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.932908    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.433563    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.933551    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.433517    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.933506    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.433459    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.933537    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.433223    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.933503    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.432603    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.933481    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.433267    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.933228    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.433253    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.933272    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.433226    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.933202    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.431772    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.933197    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.432078    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.482163    1637 kubeadm.go:1076] duration metric: took 13.645838667s to wait for elevateKubeSystemPrivileges.
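The repeated "kubectl get sa default" runs above amount to polling until the default ServiceAccount exists; a minimal bash sketch of the same wait (command and paths copied from the log, the loop itself is an assumption about intent):

	# keep retrying until the default ServiceAccount can be fetched
	until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 1
	done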
	I0610 09:22:39.482178    1637 kubeadm.go:406] StartCluster complete in 20.819301625s
	I0610 09:22:39.482188    1637 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482341    1637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:22:39.482516    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482746    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:22:39.482786    1637 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 09:22:39.482870    1637 addons.go:66] Setting volumesnapshots=true in profile "addons-098000"
	I0610 09:22:39.482872    1637 addons.go:66] Setting inspektor-gadget=true in profile "addons-098000"
	I0610 09:22:39.482879    1637 addons.go:228] Setting addon volumesnapshots=true in "addons-098000"
	I0610 09:22:39.482922    1637 addons.go:66] Setting registry=true in profile "addons-098000"
	I0610 09:22:39.482902    1637 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-098000"
	I0610 09:22:39.482936    1637 addons.go:228] Setting addon registry=true in "addons-098000"
	I0610 09:22:39.482958    1637 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:39.482880    1637 addons.go:228] Setting addon inspektor-gadget=true in "addons-098000"
	I0610 09:22:39.482979    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482984    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482878    1637 addons.go:66] Setting gcp-auth=true in profile "addons-098000"
	I0610 09:22:39.483016    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483020    1637 mustload.go:65] Loading cluster: addons-098000
	I0610 09:22:39.483034    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483276    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.483275    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.482885    1637 addons.go:66] Setting ingress=true in profile "addons-098000"
	I0610 09:22:39.483383    1637 addons.go:228] Setting addon ingress=true in "addons-098000"
	I0610 09:22:39.483423    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.483508    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483523    1637 addons.go:274] "addons-098000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0610 09:22:39.483525    1637 addons.go:464] Verifying addon registry=true in "addons-098000"
	W0610 09:22:39.483511    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483543    1637 addons.go:274] "addons-098000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0610 09:22:39.487787    1637 out.go:177] * Verifying registry addon...
	I0610 09:22:39.482886    1637 addons.go:66] Setting default-storageclass=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting cloud-spanner=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting ingress-dns=true in profile "addons-098000"
	I0610 09:22:39.482892    1637 addons.go:66] Setting storage-provisioner=true in profile "addons-098000"
	I0610 09:22:39.482899    1637 addons.go:66] Setting metrics-server=true in profile "addons-098000"
	W0610 09:22:39.483773    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483867    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.484558    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.494895    1637 addons.go:228] Setting addon ingress-dns=true in "addons-098000"
	I0610 09:22:39.494904    1637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-098000"
	I0610 09:22:39.494907    1637 addons.go:228] Setting addon metrics-server=true in "addons-098000"
	I0610 09:22:39.494911    1637 addons.go:228] Setting addon cloud-spanner=true in "addons-098000"
	I0610 09:22:39.494913    1637 addons.go:228] Setting addon storage-provisioner=true in "addons-098000"
	W0610 09:22:39.494917    1637 addons.go:274] "addons-098000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0610 09:22:39.494920    1637 addons.go:274] "addons-098000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0610 09:22:39.495382    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 09:22:39.500831    1637 addons.go:464] Verifying addon ingress=true in "addons-098000"
	I0610 09:22:39.500842    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500849    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.504830    1637 out.go:177] * Verifying ingress addon...
	I0610 09:22:39.500952    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500997    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 09:22:39.501041    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.501118    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.514859    1637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0610 09:22:39.511954    1637 addons.go:274] "addons-098000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0610 09:22:39.512421    1637 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 09:22:39.517592    1637 addons.go:228] Setting addon default-storageclass=true in "addons-098000"
	I0610 09:22:39.517921    1637 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.518096    1637 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 09:22:39.521871    1637 addons.go:464] Verifying addon metrics-server=true in "addons-098000"
	I0610 09:22:39.527803    1637 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 09:22:39.528897    1637 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 09:22:39.533879    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:22:39.533885    1637 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 09:22:39.533900    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.539950    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 09:22:39.549899    1637 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.549908    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 09:22:39.549915    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540014    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540659    1637 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.550015    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:22:39.550019    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.552885    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 09:22:39.545910    1637 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.547022    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:22:39.555818    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 09:22:39.555836    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.558872    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 09:22:39.563787    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 09:22:39.565032    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 09:22:39.576758    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 09:22:39.585719    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 09:22:39.588857    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 09:22:39.588866    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 09:22:39.588875    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.610676    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.641637    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.644621    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.683769    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.740787    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 09:22:39.740799    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 09:22:39.840307    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 09:22:39.840321    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 09:22:39.985655    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 09:22:39.985667    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 09:22:40.064364    1637 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-098000" context rescaled to 1 replicas
	I0610 09:22:40.064382    1637 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:22:40.068539    1637 out.go:177] * Verifying Kubernetes components...
	I0610 09:22:40.077600    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:40.261757    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 09:22:40.261768    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 09:22:40.290415    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 09:22:40.290425    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 09:22:40.300542    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 09:22:40.300551    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 09:22:40.308642    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 09:22:40.308652    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 09:22:40.313342    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 09:22:40.313353    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 09:22:40.318717    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 09:22:40.318725    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 09:22:40.323460    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.323466    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 09:22:40.335717    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.661069    1637 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105262875s)
	I0610 09:22:40.661101    1637 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
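To see the effect of the replace pipeline above, the patched CoreDNS ConfigMap can be dumped; the kubectl invocation mirrors the one in the log, while the grep filter is illustrative:

	# show the injected host.minikube.internal record in the Corefile
	sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A 3 "hosts {"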
	I0610 09:22:40.737190    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.12650025s)
	I0610 09:22:40.873352    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23170025s)
	I0610 09:22:40.873360    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.228730125s)
	I0610 09:22:40.873397    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.189617792s)
	I0610 09:22:40.873843    1637 node_ready.go:35] waiting up to 6m0s for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875337    1637 node_ready.go:49] node "addons-098000" has status "Ready":"True"
	I0610 09:22:40.875343    1637 node_ready.go:38] duration metric: took 1.493375ms waiting for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875346    1637 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:40.878632    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881351    1637 pod_ready.go:92] pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:40.881360    1637 pod_ready.go:81] duration metric: took 2.720875ms waiting for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881363    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:41.422744    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.08700475s)
	I0610 09:22:41.422764    1637 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:41.429025    1637 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 09:22:41.436428    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 09:22:41.441210    1637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 09:22:41.441218    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:41.945707    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.446004    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.891987    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:42.949163    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.445226    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.945705    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.445736    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.893909    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:44.949633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.445855    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.945805    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.106349    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 09:22:46.106363    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.140536    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 09:22:46.145624    1637 addons.go:228] Setting addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.145643    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:46.146378    1637 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 09:22:46.146386    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.179928    1637 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 09:22:46.183883    1637 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 09:22:46.187898    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 09:22:46.187903    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 09:22:46.192588    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 09:22:46.192594    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 09:22:46.199251    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.199256    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 09:22:46.204462    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.429785    1637 addons.go:464] Verifying addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.434320    1637 out.go:177] * Verifying gcp-auth addon...
	I0610 09:22:46.440768    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 09:22:46.443515    1637 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 09:22:46.443521    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:46.446140    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949654    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.389319    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:47.445303    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.446055    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:47.946177    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.946875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.446743    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.447103    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.945711    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.946918    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.389715    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:49.445862    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.448994    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.945095    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.945638    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.446626    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.446936    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.887650    1637 pod_ready.go:97] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887663    1637 pod_ready.go:81] duration metric: took 10.00631125s waiting for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	E0610 09:22:50.887668    1637 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887672    1637 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890299    1637 pod_ready.go:92] pod "etcd-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.890307    1637 pod_ready.go:81] duration metric: took 2.63175ms waiting for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890310    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892694    1637 pod_ready.go:92] pod "kube-apiserver-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.892699    1637 pod_ready.go:81] duration metric: took 2.386083ms waiting for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892703    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895043    1637 pod_ready.go:92] pod "kube-controller-manager-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.895049    1637 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895053    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897341    1637 pod_ready.go:92] pod "kube-proxy-jpnqh" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.897346    1637 pod_ready.go:81] duration metric: took 2.29075ms waiting for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897350    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.945358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.946279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.288420    1637 pod_ready.go:92] pod "kube-scheduler-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:51.288430    1637 pod_ready.go:81] duration metric: took 391.078333ms waiting for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:51.288436    1637 pod_ready.go:38] duration metric: took 10.413098792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:51.288445    1637 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:22:51.288516    1637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:22:51.295818    1637 api_server.go:72] duration metric: took 11.231423584s to wait for apiserver process to appear ...
	I0610 09:22:51.295824    1637 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:22:51.295831    1637 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0610 09:22:51.299125    1637 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0610 09:22:51.299826    1637 api_server.go:141] control plane version: v1.27.2
	I0610 09:22:51.299832    1637 api_server.go:131] duration metric: took 4.005625ms to wait for apiserver health ...
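The healthz probe logged above can be reproduced by hand against the same endpoint (the curl invocation is an assumption; -k skips certificate verification because the apiserver certificate is signed by minikubeCA rather than a trusted root):

	curl -k https://192.168.105.2:8443/healthz
	# the log shows this endpoint returning 200 with body: ok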
	I0610 09:22:51.299835    1637 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:22:51.445314    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:51.446212    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.490284    1637 system_pods.go:59] 11 kube-system pods found
	I0610 09:22:51.490295    1637 system_pods.go:61] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.490299    1637 system_pods.go:61] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.490303    1637 system_pods.go:61] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.490306    1637 system_pods.go:61] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.490311    1637 system_pods.go:61] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.490314    1637 system_pods.go:61] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.490317    1637 system_pods.go:61] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.490320    1637 system_pods.go:61] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.490323    1637 system_pods.go:61] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.490325    1637 system_pods.go:61] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.490336    1637 system_pods.go:61] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.490341    1637 system_pods.go:74] duration metric: took 190.503333ms to wait for pod list to return data ...
	I0610 09:22:51.490345    1637 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:22:51.687921    1637 default_sa.go:45] found service account: "default"
	I0610 09:22:51.687931    1637 default_sa.go:55] duration metric: took 197.581625ms for default service account to be created ...
	I0610 09:22:51.687935    1637 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:22:51.890310    1637 system_pods.go:86] 11 kube-system pods found
	I0610 09:22:51.890320    1637 system_pods.go:89] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.890326    1637 system_pods.go:89] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.890330    1637 system_pods.go:89] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.890333    1637 system_pods.go:89] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.890336    1637 system_pods.go:89] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.890338    1637 system_pods.go:89] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.890341    1637 system_pods.go:89] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.890344    1637 system_pods.go:89] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.890349    1637 system_pods.go:89] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.890351    1637 system_pods.go:89] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.890354    1637 system_pods.go:89] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.890357    1637 system_pods.go:126] duration metric: took 202.419584ms to wait for k8s-apps to be running ...
	I0610 09:22:51.890363    1637 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:22:51.890418    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:51.897401    1637 system_svc.go:56] duration metric: took 7.035125ms WaitForService to wait for kubelet.
	I0610 09:22:51.897410    1637 kubeadm.go:581] duration metric: took 11.8330175s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:22:51.897420    1637 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:22:51.944537    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.945311    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.087254    1637 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:22:52.087281    1637 node_conditions.go:123] node cpu capacity is 2
	I0610 09:22:52.087290    1637 node_conditions.go:105] duration metric: took 189.867833ms to run NodePressure ...
	I0610 09:22:52.087295    1637 start.go:228] waiting for startup goroutines ...
	I0610 09:22:52.445279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:52.445610    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.945799    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.946052    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.445389    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.446014    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.945473    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.946237    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.446325    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.448382    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.948114    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.951263    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.447181    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.447511    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.945501    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.946418    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.445349    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.445910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.945410    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.946065    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.447469    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.448009    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.945353    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.946520    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454959    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.946148    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.947450    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.446206    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:00.447700    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.944434    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.945129    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.445646    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.446643    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:01.945710    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.947152    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.450730    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.454285    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.952960    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.955376    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.446358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.447878    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.945294    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.946290    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.445145    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.446164    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.946364    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.946514    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.449729    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.453690    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.947873    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.950281    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.445562    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.445795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.946136    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.947509    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.445951    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.446633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.945814    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.946157    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.446086    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.446099    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.970991    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.971383    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.448620    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:09.449087    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.946728    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.948250    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446827    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446978    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:10.945421    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.945732    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.444797    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.445621    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.948926    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.949262    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.452305    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:12.453786    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.948653    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.949795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.445378    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.446558    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:13.946404    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.946644    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.446073    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.446331    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.946569    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.946725    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.445689    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.446865    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.947373    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.948973    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.445756    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:16.446819    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.944171    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.945088    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.448798    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.450089    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:17.952301    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.955532    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.945244    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.946363    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.445300    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:19.445962    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944002    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944781    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.446084    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:20.446223    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.952440    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.954313    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.445625    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.446916    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.945782    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.947236    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.445836    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.446162    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.945365    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.946169    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.449820    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:23.452877    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.953442    1637 kapi.go:107] duration metric: took 37.512712584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 09:23:23.958122    1637 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-098000 cluster.
	I0610 09:23:23.957179    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.961932    1637 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 09:23:23.965925    1637 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 09:23:24.450360    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:24.945980    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.445712    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.946008    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.446034    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.950257    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.454943    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.956882    1637 kapi.go:107] duration metric: took 46.520505042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 09:28:39.510321    1637 kapi.go:107] duration metric: took 6m0.007516916s to wait for kubernetes.io/minikube-addons=registry ...
	W0610 09:28:39.510625    1637 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0610 09:28:39.531369    1637 kapi.go:107] duration metric: took 6m0.011549375s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0610 09:28:39.531491    1637 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0610 09:28:39.539250    1637 out.go:177] * Enabled addons: volumesnapshots, inspektor-gadget, metrics-server, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, gcp-auth, csi-hostpath-driver
	I0610 09:28:39.545184    1637 addons.go:499] enable addons completed in 6m0.055013834s: enabled=[volumesnapshots inspektor-gadget metrics-server cloud-spanner storage-provisioner default-storageclass ingress-dns gcp-auth csi-hostpath-driver]
	I0610 09:28:39.545227    1637 start.go:233] waiting for cluster config update ...
	I0610 09:28:39.545256    1637 start.go:242] writing updated cluster config ...
	I0610 09:28:39.546371    1637 ssh_runner.go:195] Run: rm -f paused
	I0610 09:28:39.689843    1637 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:28:39.694186    1637 out.go:177] 
	W0610 09:28:39.697254    1637 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:28:39.701213    1637 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:28:39.709228    1637 out.go:177] * Done! kubectl is now configured to use "addons-098000" cluster and "default" namespace by default
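The repeated kapi.go:96 "waiting for pod" entries above are minikube polling each addon's pods by label selector until they report Running; for kubernetes.io/minikube-addons=registry and app.kubernetes.io/name=ingress-nginx the poll runs into its 6-minute deadline and the addon enable returns "context deadline exceeded". As a rough illustration of that kind of label-based wait (a hypothetical client-go sketch only, not minikube's actual implementation; the kubeconfig path is a placeholder):

// Sketch: poll pods matching a label selector until all are Running, or the
// context deadline expires (mirroring the "context deadline exceeded" errors above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // e.g. context deadline exceeded
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Placeholder kubeconfig path; an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err = waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry")
	fmt.Println("wait result:", err)
}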
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:40:40 UTC. --
	Jun 10 16:28:38 addons-098000 dockerd[939]: time="2023-06-10T16:28:38.780840261Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940455787Z" level=info msg="shim disconnected" id=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940486162Z" level=warning msg="cleaning up after shim disconnected" id=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940492579Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[933]: time="2023-06-10T16:28:44.940636785Z" level=info msg="ignoring event" container=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:28:44 addons-098000 dockerd[933]: time="2023-06-10T16:28:44.998241480Z" level=info msg="ignoring event" container=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998855056Z" level=info msg="shim disconnected" id=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998883722Z" level=warning msg="cleaning up after shim disconnected" id=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998888056Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737483784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737542367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737565825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737574241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:33:46 addons-098000 dockerd[933]: time="2023-06-10T16:33:46.778804028Z" level=info msg="ignoring event" container=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779017026Z" level=info msg="shim disconnected" id=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779051484Z" level=warning msg="cleaning up after shim disconnected" id=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779056025Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.747938173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.747997298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.748248087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.748452960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:38:54 addons-098000 dockerd[933]: time="2023-06-10T16:38:54.805196171Z" level=info msg="ignoring event" container=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805369210Z" level=info msg="shim disconnected" id=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805425252Z" level=warning msg="cleaning up after shim disconnected" id=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805429585Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID
	6877b2a4c1b8b       1499ed4fbd0aa                                                                                                                                About a minute ago   Exited              minikube-ingress-dns                     8                   8e5b404496c4e
	23a8cae6443cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 minutes ago       Running             csi-snapshotter                          0                   567c041b8040d
	1a73024f59864       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 17 minutes ago       Running             gcp-auth                                 0                   d8f3043938a40
	3fa8701fda26c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          17 minutes ago       Running             csi-provisioner                          0                   567c041b8040d
	aafd1d61dfe4b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            17 minutes ago       Running             liveness-probe                           0                   567c041b8040d
	2b6767dfbe9d3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           17 minutes ago       Running             hostpath                                 0                   567c041b8040d
	8f02984364568       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                17 minutes ago       Running             node-driver-registrar                    0                   567c041b8040d
	868cfa9fcba69       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   17 minutes ago       Running             csi-external-health-monitor-controller   0                   567c041b8040d
	26cfafca2bb0d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              17 minutes ago       Running             csi-resizer                              0                   a78a427783820
	c58c2d26acda8       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             17 minutes ago       Running             csi-attacher                             0                   674b1cd12ae30
	46105da82f67a       ba04bb24b9575                                                                                                                                17 minutes ago       Running             storage-provisioner                      0                   67c7765a9fa6e
	de0a71571f8d0       29921a0845422                                                                                                                                18 minutes ago       Running             kube-proxy                               0                   2bc9129027615
	adfb52103967f       97e04611ad434                                                                                                                                18 minutes ago       Running             coredns                                  0                   d428f978de558
	335475d795fcf       305d7ed1dae28                                                                                                                                18 minutes ago       Running             kube-scheduler                           0                   31fdcf4abeef0
	3dcf946c301ce       2ee705380c3c5                                                                                                                                18 minutes ago       Running             kube-controller-manager                  0                   9fed8ca4bd2f8
	74423d2dab41d       72c9df6be7f1b                                                                                                                                18 minutes ago       Running             kube-apiserver                           0                   11d78b6999216
	2a81bf4413e12       24bc64e911039                                                                                                                                18 minutes ago       Running             etcd                                     0                   a20e51a803c8c
	
	* 
	* ==> coredns [adfb52103967] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46766 - 38334 "HINFO IN 1120296007274907072.5268654669647465865. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004199511s
	[INFO] 10.244.0.10:39208 - 36576 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125s
	[INFO] 10.244.0.10:59425 - 64759 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155334s
	[INFO] 10.244.0.10:33915 - 19077 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000037167s
	[INFO] 10.244.0.10:46994 - 65166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002725s
	[INFO] 10.244.0.10:46598 - 37414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043625s
	[INFO] 10.244.0.10:55204 - 18019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000032792s
	[INFO] 10.244.0.10:60613 - 7185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000939127s
	[INFO] 10.244.0.10:40293 - 55849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00103996s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=addons-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-098000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-098000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-098000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:40:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 43359b33bc0f4b9c9610dd4ec5308f62
	  System UUID:                43359b33bc0f4b9c9610dd4ec5308f62
	  Boot ID:                    eb81fa5c-fe8f-47ab-b5e5-9f5fe2e987b0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-jkcxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5d78c9869d-f2tnn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-pjvh6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-098000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-jpnqh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (4%)   170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-098000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-098000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-098000 event: Registered Node addons-098000 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun10 16:22] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.696014] EINJ: EINJ table not found.
	[  +0.658239] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043798] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000807] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.876165] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.071972] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.924516] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +2.288987] systemd-fstab-generator[866]: Ignoring "noauto" for root device
	[  +0.165983] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +0.077870] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.072149] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +1.146266] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.099605] systemd-fstab-generator[1083]: Ignoring "noauto" for root device
	[  +0.082038] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.080513] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.078963] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.086582] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	[  +3.056689] systemd-fstab-generator[1402]: Ignoring "noauto" for root device
	[  +4.651414] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[ +14.757696] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.157496] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.873848] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jun10 16:23] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [2a81bf4413e1] <==
	* {"level":"info","ts":"2023-06-10T16:22:22.463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.857Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-098000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:32:22.450Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":974}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":974,"took":"2.490131ms","hash":4035340276}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035340276,"revision":974,"compact-revision":-1}
	{"level":"info","ts":"2023-06-10T16:37:22.461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1290}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1290,"took":"1.421443ms","hash":2326989487}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2326989487,"revision":1290,"compact-revision":974}
	
	* 
	* ==> gcp-auth [1a73024f5986] <==
	* 2023/06/10 16:23:23 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  16:40:40 up 18 min,  0 users,  load average: 0.60, 0.54, 0.41
	Linux addons-098000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [74423d2dab41] <==
	* I0610 16:22:23.642323       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:22:23.642356       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:22:23.657792       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:22:24.401560       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:22:24.563279       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:22:24.568497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:22:24.568654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:22:24.720978       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:22:24.731371       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:22:24.801810       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:22:24.805350       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0610 16:22:24.806303       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:22:24.807740       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:22:25.583035       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:22:26.059225       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:22:26.063878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:22:26.068513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:22:39.217505       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 16:22:39.917252       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:22:40.754199       1 alloc.go:330] "allocated clusterIPs" service="default/cloud-spanner-emulator" clusterIPs=map[IPv4:10.99.222.169]
	I0610 16:22:41.357691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.106.85.14]
	I0610 16:22:41.362266       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0610 16:22:41.419673       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.111.90.60]
	I0610 16:22:46.394399       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.102.46.8]
	I0610 16:22:46.411449       1 controller.go:624] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [3dcf946c301c] <==
	* I0610 16:22:46.441438       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:22:46.444358       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:22:46.468051       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:09.211557       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:09.224222       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:10.225842       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:10.320708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.244592       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:11.258467       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.330708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.333357       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.335850       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:11.335887       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.336870       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.345682       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.251101       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.256393       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.263577       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:12.263691       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.265671       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.266556       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:41.027747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:41.050836       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:42.013412       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:42.047992       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [de0a71571f8d] <==
	* I0610 16:22:40.477801       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0610 16:22:40.477968       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0610 16:22:40.477988       1 server_others.go:551] "Using iptables proxy"
	I0610 16:22:40.508315       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:22:40.508325       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:22:40.508357       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:22:40.508608       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:22:40.508614       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:22:40.509861       1 config.go:188] "Starting service config controller"
	I0610 16:22:40.509869       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:22:40.509881       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:22:40.509882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:22:40.511342       1 config.go:315] "Starting node config controller"
	I0610 16:22:40.511347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:22:40.609918       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:22:40.609943       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:22:40.611397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [335475d795fc] <==
	* W0610 16:22:23.606482       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:22:23.606891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:22:23.606959       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:22:23.606982       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:22:23.607008       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607026       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607067       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:23.607087       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:23.607166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607247       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:23.607268       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.463642       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:22:24.463731       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:22:24.485768       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:22:24.485809       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:22:24.588161       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:24.588197       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:24.600064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:24.600158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.604631       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:22:24.604651       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:22:24.616055       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:22:24.616131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:22:27.098734       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:40:40 UTC. --
	Jun 10 16:38:55 addons-098000 kubelet[2091]: I0610 16:38:55.162422    2091 scope.go:115] "RemoveContainer" containerID="865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4"
	Jun 10 16:38:55 addons-098000 kubelet[2091]: I0610 16:38:55.162726    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:38:55 addons-098000 kubelet[2091]: E0610 16:38:55.162992    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:07 addons-098000 kubelet[2091]: I0610 16:39:07.681259    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:07 addons-098000 kubelet[2091]: E0610 16:39:07.682935    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:19 addons-098000 kubelet[2091]: I0610 16:39:19.682302    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:19 addons-098000 kubelet[2091]: E0610 16:39:19.683995    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:25 addons-098000 kubelet[2091]: E0610 16:39:25.689415    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:39:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:39:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:39:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:39:32 addons-098000 kubelet[2091]: I0610 16:39:32.680792    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:32 addons-098000 kubelet[2091]: E0610 16:39:32.681284    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:47 addons-098000 kubelet[2091]: I0610 16:39:47.681883    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:47 addons-098000 kubelet[2091]: E0610 16:39:47.684253    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:02 addons-098000 kubelet[2091]: I0610 16:40:02.681991    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:02 addons-098000 kubelet[2091]: E0610 16:40:02.683097    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:13 addons-098000 kubelet[2091]: I0610 16:40:13.680609    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:13 addons-098000 kubelet[2091]: E0610 16:40:13.680899    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:25 addons-098000 kubelet[2091]: E0610 16:40:25.787611    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:40:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:40:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:40:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:40:27 addons-098000 kubelet[2091]: I0610 16:40:27.681515    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:27 addons-098000 kubelet[2091]: E0610 16:40:27.682723    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	
	* 
	* ==> storage-provisioner [46105da82f67] <==
	* I0610 16:22:41.552997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:22:41.564566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:22:41.564604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:22:41.567070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:22:41.567242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b8b8b2f-e69f-4abd-8693-9c0a331852aa", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-098000_976d826c-217e-4d0d-87e7-e825dd783783 became leader
	I0610 16:22:41.567336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	I0610 16:22:41.668274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (721.04s)

TestAddons/parallel/Ingress (0.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-098000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-098000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (38.091375ms)

** stderr ** 
	error: no matching resources found

** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-098000 -n addons-098000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-098000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | --download-only -p             | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | binary-mirror-025000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000        | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | -p addons-098000               | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:28 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:28 PDT | 10 Jun 23 09:28 PDT |
	|         | addons-098000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:54.764352    1637 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:54.764757    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764761    1637 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:54.764764    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764861    1637 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:21:54.766294    1637 out.go:303] Setting JSON to false
	I0610 09:21:54.781540    1637 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1285,"bootTime":1686412829,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:54.781615    1637 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:54.786460    1637 out.go:177] * [addons-098000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:54.793542    1637 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:21:54.798440    1637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:54.793561    1637 notify.go:220] Checking for updates...
	I0610 09:21:54.804413    1637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:54.807450    1637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:54.810460    1637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:21:54.811765    1637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:21:54.814627    1637 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:54.818412    1637 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:21:54.823426    1637 start.go:297] selected driver: qemu2
	I0610 09:21:54.823432    1637 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:54.823441    1637 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:21:54.825256    1637 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:54.828578    1637 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:21:54.831535    1637 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:21:54.831554    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:21:54.831575    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:54.831579    1637 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:21:54.831586    1637 start_flags.go:319] config:
	{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:54.831700    1637 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:54.840445    1637 out.go:177] * Starting control plane node addons-098000 in cluster addons-098000
	I0610 09:21:54.844425    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:54.844451    1637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:21:54.844469    1637 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:54.844530    1637 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 09:21:54.844535    1637 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:21:54.844735    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:21:54.844750    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json: {Name:mkfbe060a3258f68fbe8b01ce26e4a7ada2f24f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:54.844947    1637 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:21:54.844969    1637 start.go:364] acquiring machines lock for addons-098000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:21:54.845063    1637 start.go:368] acquired machines lock for "addons-098000" in 89.292µs
	I0610 09:21:54.845075    1637 start.go:93] Provisioning new machine with config: &{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:21:54.845115    1637 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:21:54.853376    1637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 09:21:55.217388    1637 start.go:159] libmachine.API.Create for "addons-098000" (driver="qemu2")
	I0610 09:21:55.217427    1637 client.go:168] LocalClient.Create starting
	I0610 09:21:55.217549    1637 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:21:55.301145    1637 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:21:55.414002    1637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:21:55.826273    1637 main.go:141] libmachine: Creating SSH key...
	I0610 09:21:55.859428    1637 main.go:141] libmachine: Creating Disk image...
	I0610 09:21:55.859434    1637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:21:55.859612    1637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.941560    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:55.941581    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.941655    1637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2 +20000M
	I0610 09:21:55.948999    1637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:21:55.949013    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.949042    1637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.949049    1637 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:21:55.949080    1637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:e2:60:7a:4e:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:56.034280    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:56.034334    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:56.034338    1637 main.go:141] libmachine: Attempt 0
	I0610 09:21:56.034355    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:21:58.036587    1637 main.go:141] libmachine: Attempt 1
	I0610 09:21:58.036664    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:00.038868    1637 main.go:141] libmachine: Attempt 2
	I0610 09:22:00.038909    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:02.040980    1637 main.go:141] libmachine: Attempt 3
	I0610 09:22:02.040996    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:04.043076    1637 main.go:141] libmachine: Attempt 4
	I0610 09:22:04.043113    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:06.045175    1637 main.go:141] libmachine: Attempt 5
	I0610 09:22:06.045200    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047388    1637 main.go:141] libmachine: Attempt 6
	I0610 09:22:08.047472    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047875    1637 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0610 09:22:08.047987    1637 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6485f4af}
	I0610 09:22:08.048012    1637 main.go:141] libmachine: Found match: c2:e2:60:7a:4e:46
	I0610 09:22:08.048053    1637 main.go:141] libmachine: IP: 192.168.105.2
	I0610 09:22:08.048083    1637 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0610 09:22:10.069705    1637 machine.go:88] provisioning docker machine ...
	I0610 09:22:10.069788    1637 buildroot.go:166] provisioning hostname "addons-098000"
	I0610 09:22:10.070644    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.071570    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.071588    1637 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-098000 && echo "addons-098000" | sudo tee /etc/hostname
	I0610 09:22:10.164038    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-098000
	
	I0610 09:22:10.164160    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.164626    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.164641    1637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:22:10.239261    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:22:10.239281    1637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:22:10.239300    1637 buildroot.go:174] setting up certificates
	I0610 09:22:10.239307    1637 provision.go:83] configureAuth start
	I0610 09:22:10.239314    1637 provision.go:138] copyHostCerts
	I0610 09:22:10.239507    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:22:10.240632    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:22:10.241010    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:22:10.241260    1637 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.addons-098000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-098000]
	I0610 09:22:10.307069    1637 provision.go:172] copyRemoteCerts
	I0610 09:22:10.307140    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:22:10.307172    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.339991    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:22:10.346931    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 09:22:10.353742    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:22:10.360626    1637 provision.go:86] duration metric: configureAuth took 121.313416ms
	I0610 09:22:10.360639    1637 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:22:10.361002    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:10.361055    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.361272    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.361276    1637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:22:10.420194    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:22:10.420201    1637 buildroot.go:70] root file system type: tmpfs
	I0610 09:22:10.420251    1637 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:22:10.420295    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.420542    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.420577    1637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:22:10.485025    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:22:10.485070    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.485298    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.485310    1637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:22:10.830569    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:22:10.830580    1637 machine.go:91] provisioned docker machine in 760.843209ms
	I0610 09:22:10.830585    1637 client.go:171] LocalClient.Create took 15.613176541s
	I0610 09:22:10.830594    1637 start.go:167] duration metric: libmachine.API.Create for "addons-098000" took 15.613236583s
	I0610 09:22:10.830598    1637 start.go:300] post-start starting for "addons-098000" (driver="qemu2")
	I0610 09:22:10.830601    1637 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:22:10.830682    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:22:10.830692    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.862119    1637 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:22:10.863469    1637 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:22:10.863478    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:22:10.863540    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:22:10.863565    1637 start.go:303] post-start completed in 32.963459ms
	I0610 09:22:10.863901    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:22:10.864045    1637 start.go:128] duration metric: createHost completed in 16.018950083s
	I0610 09:22:10.864069    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.864287    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.864291    1637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:22:10.923434    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686414131.384712585
	
	I0610 09:22:10.923441    1637 fix.go:207] guest clock: 1686414131.384712585
	I0610 09:22:10.923446    1637 fix.go:220] Guest: 2023-06-10 09:22:11.384712585 -0700 PDT Remote: 2023-06-10 09:22:10.864048 -0700 PDT m=+16.118188126 (delta=520.664585ms)
	I0610 09:22:10.923456    1637 fix.go:191] guest clock delta is within tolerance: 520.664585ms
	I0610 09:22:10.923459    1637 start.go:83] releasing machines lock for "addons-098000", held for 16.0784145s
	I0610 09:22:10.923756    1637 ssh_runner.go:195] Run: cat /version.json
	I0610 09:22:10.923765    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.923833    1637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:22:10.923872    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:11.040251    1637 ssh_runner.go:195] Run: systemctl --version
	I0610 09:22:11.042905    1637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:22:11.045415    1637 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:22:11.045461    1637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:22:11.051643    1637 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:22:11.051653    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:11.051736    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:11.061365    1637 docker.go:633] Got preloaded images: 
	I0610 09:22:11.061374    1637 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:22:11.061418    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:11.064624    1637 ssh_runner.go:195] Run: which lz4
	I0610 09:22:11.066056    1637 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 09:22:11.067511    1637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:22:11.067524    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 09:22:12.384653    1637 docker.go:597] Took 1.318649 seconds to copy over tarball
	I0610 09:22:12.384711    1637 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:22:13.518722    1637 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.133975834s)
	I0610 09:22:13.518746    1637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:22:13.534141    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:13.537423    1637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:22:13.542380    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:13.617910    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:15.783768    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165840375s)
	I0610 09:22:15.783797    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.783942    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.789136    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:22:15.792061    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:22:15.794990    1637 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:22:15.795014    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:22:15.798511    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.801745    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:22:15.804884    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.807635    1637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:22:15.810661    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:22:15.814158    1637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:22:15.817306    1637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:22:15.819948    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:15.905204    1637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:22:15.910905    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.910988    1637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:22:15.916986    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.922219    1637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:22:15.929205    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.933866    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.938677    1637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:22:15.974269    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.979243    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.984512    1637 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:22:15.985792    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:22:15.988369    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:22:15.993006    1637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:22:16.073036    1637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:22:16.147707    1637 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:22:16.147726    1637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:22:16.152764    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:16.219604    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:17.389947    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170326875s)
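The 144-byte /etc/docker/daemon.json written a few lines above is not shown in the log; below is a minimal sketch consistent with the "cgroupfs" message. Only the cgroup-driver key is confirmed by the log; any other keys minikube writes are omitted here as unknown:

	# Inspect the daemon config that switched Docker to the cgroupfs driver.
	cat /etc/docker/daemon.json
	# Assumed minimal shape:
	# {
	#   "exec-opts": ["native.cgroupdriver=cgroupfs"]
	# }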
	I0610 09:22:17.390012    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.468450    1637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:22:17.548751    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.629562    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.707590    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:22:17.714930    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.794794    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:22:17.819341    1637 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:22:17.819427    1637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:22:17.821557    1637 start.go:549] Will wait 60s for crictl version
	I0610 09:22:17.821591    1637 ssh_runner.go:195] Run: which crictl
	I0610 09:22:17.825207    1637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:22:17.842430    1637 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:22:17.842501    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.850299    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.866701    1637 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:22:17.866866    1637 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:22:17.868327    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.871885    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:17.871927    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.877489    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.877499    1637 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:22:17.877550    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.883143    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.883157    1637 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:22:17.883198    1637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
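A consistency check mirroring the query above: Docker's cgroup driver should match the kubelet's cgroupDriver once kubeadm has written /var/lib/kubelet/config.yaml later in this run (both are expected to read "cgroupfs" for this profile; the kubelet config path is taken from the kubeadm output further down):

	# Both values should agree; a mismatch makes the kubelet fail to start pods.
	docker info --format '{{.CgroupDriver}}'
	sudo grep cgroupDriver /var/lib/kubelet/config.yaml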
	I0610 09:22:17.890410    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:17.890420    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:17.890445    1637 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:22:17.890455    1637 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-098000 NodeName:addons-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:22:17.890526    1637 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-098000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:22:17.890573    1637 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:22:17.890631    1637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:22:17.893850    1637 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:22:17.893880    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:22:17.896724    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0610 09:22:17.901642    1637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:22:17.906483    1637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0610 09:22:17.911373    1637 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0610 09:22:17.912694    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.916067    1637 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000 for IP: 192.168.105.2
	I0610 09:22:17.916076    1637 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:17.916236    1637 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:22:18.022564    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt ...
	I0610 09:22:18.022569    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt: {Name:mk821d9de36f93438ad430683cb25e2f1c33c9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022803    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key ...
	I0610 09:22:18.022806    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key: {Name:mk750eea32c0b02b6ad84d81711cbfd77ceefe90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022913    1637 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:22:18.159699    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt ...
	I0610 09:22:18.159708    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt: {Name:mk10e39bee2c5c6785228bc7733548a740243d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.159914    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key ...
	I0610 09:22:18.159917    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key: {Name:mk04d776031cd8d2755a757ba7736e35a9c25212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.160037    1637 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key
	I0610 09:22:18.160044    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt with IP's: []
	I0610 09:22:18.246526    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt ...
	I0610 09:22:18.246530    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: {Name:mk301aca75dad20ac385eb683aae1662edff3d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246697    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key ...
	I0610 09:22:18.246700    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key: {Name:mkdf4a2bc618a029a53fbd786e41dffe68b8316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246803    1637 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969
	I0610 09:22:18.246812    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:22:18.411436    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 ...
	I0610 09:22:18.411440    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969: {Name:mk922ab871b245e2b8e7e4b2a109a553fe1bcc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411596    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 ...
	I0610 09:22:18.411599    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969: {Name:mkdde2defc189629d0924fe6871b2adb52e47c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411697    1637 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt
	I0610 09:22:18.411933    1637 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key
	I0610 09:22:18.412033    1637 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key
	I0610 09:22:18.412047    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt with IP's: []
	I0610 09:22:18.578568    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt ...
	I0610 09:22:18.578583    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt: {Name:mkb4544f3ff14d84a98fd9ec92bfcdbb5d50e84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.578783    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key ...
	I0610 09:22:18.578786    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key: {Name:mk82ce3998197ea814bf8f591a5b4b56c617f405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.579030    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:22:18.579468    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:22:18.579491    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:22:18.579672    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:22:18.580285    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:22:18.587660    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:22:18.594728    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:22:18.602219    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 09:22:18.609690    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:22:18.617442    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:22:18.624297    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:22:18.631049    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:22:18.638070    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:22:18.644969    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:22:18.650094    1637 ssh_runner.go:195] Run: openssl version
	I0610 09:22:18.652167    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:22:18.655090    1637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656540    1637 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656561    1637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.658363    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
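The b5213941.0 name in the command above is OpenSSL's subject-hash form of the minikube CA; the link can be rebuilt by hand the same way (paths taken from the two preceding commands):

	# Compute the subject hash OpenSSL uses to look up trusted CAs,
	# then link the minikube CA under that name.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"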
	I0610 09:22:18.661572    1637 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:22:18.662872    1637 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:22:18.662908    1637 kubeadm.go:404] StartCluster: {Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:22:18.662975    1637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:22:18.668496    1637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:22:18.671389    1637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:22:18.674606    1637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:22:18.677626    1637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:22:18.677644    1637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:22:18.703158    1637 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:22:18.703188    1637 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:22:18.757797    1637 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:22:18.757860    1637 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:22:18.757910    1637 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:22:18.816123    1637 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:22:18.821365    1637 out.go:204]   - Generating certificates and keys ...
	I0610 09:22:18.821409    1637 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:22:18.821441    1637 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:22:19.085233    1637 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:22:19.181413    1637 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:22:19.330348    1637 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:22:19.412707    1637 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:22:19.604000    1637 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:22:19.604069    1637 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.814398    1637 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:22:19.814478    1637 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.907005    1637 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:22:20.056367    1637 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:22:20.125295    1637 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:22:20.125333    1637 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:22:20.241297    1637 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:22:20.330399    1637 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:22:20.489216    1637 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:22:20.764229    1637 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:22:20.771051    1637 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:22:20.771103    1637 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:22:20.771135    1637 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:22:20.859965    1637 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:22:20.864105    1637 out.go:204]   - Booting up control plane ...
	I0610 09:22:20.864178    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:22:20.864224    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:22:20.864257    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:22:20.864302    1637 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:22:20.865267    1637 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:22:24.366796    1637 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501337 seconds
	I0610 09:22:24.366861    1637 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:22:24.372204    1637 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:22:24.898455    1637 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:22:24.898779    1637 kubeadm.go:322] [mark-control-plane] Marking the node addons-098000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:22:25.404043    1637 kubeadm.go:322] [bootstrap-token] Using token: 8xmw5d.kvohdu7dlcpn05ob
	I0610 09:22:25.410608    1637 out.go:204]   - Configuring RBAC rules ...
	I0610 09:22:25.410669    1637 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:22:25.411737    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:22:25.418545    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:22:25.419904    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:22:25.421252    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:22:25.422283    1637 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:22:25.427205    1637 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:22:25.603958    1637 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:22:25.815834    1637 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:22:25.816185    1637 kubeadm.go:322] 
	I0610 09:22:25.816225    1637 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:22:25.816233    1637 kubeadm.go:322] 
	I0610 09:22:25.816291    1637 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:22:25.816295    1637 kubeadm.go:322] 
	I0610 09:22:25.816308    1637 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:22:25.816346    1637 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:22:25.816388    1637 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:22:25.816392    1637 kubeadm.go:322] 
	I0610 09:22:25.816425    1637 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:22:25.816430    1637 kubeadm.go:322] 
	I0610 09:22:25.816463    1637 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:22:25.816466    1637 kubeadm.go:322] 
	I0610 09:22:25.816508    1637 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:22:25.816560    1637 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:22:25.816602    1637 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:22:25.816605    1637 kubeadm.go:322] 
	I0610 09:22:25.816653    1637 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:22:25.816694    1637 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:22:25.816699    1637 kubeadm.go:322] 
	I0610 09:22:25.816749    1637 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.816801    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:22:25.816815    1637 kubeadm.go:322] 	--control-plane 
	I0610 09:22:25.816823    1637 kubeadm.go:322] 
	I0610 09:22:25.816880    1637 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:22:25.816883    1637 kubeadm.go:322] 
	I0610 09:22:25.816931    1637 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.817003    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:22:25.817072    1637 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:22:25.817175    1637 kubeadm.go:322] W0610 16:22:19.219117    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817283    1637 kubeadm.go:322] W0610 16:22:21.323610    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817294    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:25.817303    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:25.823848    1637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:22:25.826928    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:22:25.830443    1637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 09:22:25.836316    1637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:22:25.836378    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.836393    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=addons-098000 minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.900338    1637 ops.go:34] apiserver oom_adj: -16
	I0610 09:22:25.900382    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.433306    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.933284    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.433115    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.933305    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.433535    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.933493    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.433524    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.932908    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.433563    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.933551    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.433517    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.933506    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.433459    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.933537    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.433223    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.933503    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.432603    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.933481    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.433267    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.933228    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.433253    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.933272    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.433226    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.933202    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.431772    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.933197    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.432078    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.482163    1637 kubeadm.go:1076] duration metric: took 13.645838667s to wait for elevateKubeSystemPrivileges.
	I0610 09:22:39.482178    1637 kubeadm.go:406] StartCluster complete in 20.819301625s
	I0610 09:22:39.482188    1637 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482341    1637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:22:39.482516    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482746    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:22:39.482786    1637 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 09:22:39.482870    1637 addons.go:66] Setting volumesnapshots=true in profile "addons-098000"
	I0610 09:22:39.482872    1637 addons.go:66] Setting inspektor-gadget=true in profile "addons-098000"
	I0610 09:22:39.482879    1637 addons.go:228] Setting addon volumesnapshots=true in "addons-098000"
	I0610 09:22:39.482922    1637 addons.go:66] Setting registry=true in profile "addons-098000"
	I0610 09:22:39.482902    1637 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-098000"
	I0610 09:22:39.482936    1637 addons.go:228] Setting addon registry=true in "addons-098000"
	I0610 09:22:39.482958    1637 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:39.482880    1637 addons.go:228] Setting addon inspektor-gadget=true in "addons-098000"
	I0610 09:22:39.482979    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482984    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482878    1637 addons.go:66] Setting gcp-auth=true in profile "addons-098000"
	I0610 09:22:39.483016    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483020    1637 mustload.go:65] Loading cluster: addons-098000
	I0610 09:22:39.483034    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483276    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.483275    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.482885    1637 addons.go:66] Setting ingress=true in profile "addons-098000"
	I0610 09:22:39.483383    1637 addons.go:228] Setting addon ingress=true in "addons-098000"
	I0610 09:22:39.483423    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.483508    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483523    1637 addons.go:274] "addons-098000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0610 09:22:39.483525    1637 addons.go:464] Verifying addon registry=true in "addons-098000"
	W0610 09:22:39.483511    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483543    1637 addons.go:274] "addons-098000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0610 09:22:39.487787    1637 out.go:177] * Verifying registry addon...
	I0610 09:22:39.482886    1637 addons.go:66] Setting default-storageclass=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting cloud-spanner=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting ingress-dns=true in profile "addons-098000"
	I0610 09:22:39.482892    1637 addons.go:66] Setting storage-provisioner=true in profile "addons-098000"
	I0610 09:22:39.482899    1637 addons.go:66] Setting metrics-server=true in profile "addons-098000"
	W0610 09:22:39.483773    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483867    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.484558    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.494895    1637 addons.go:228] Setting addon ingress-dns=true in "addons-098000"
	I0610 09:22:39.494904    1637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-098000"
	I0610 09:22:39.494907    1637 addons.go:228] Setting addon metrics-server=true in "addons-098000"
	I0610 09:22:39.494911    1637 addons.go:228] Setting addon cloud-spanner=true in "addons-098000"
	I0610 09:22:39.494913    1637 addons.go:228] Setting addon storage-provisioner=true in "addons-098000"
	W0610 09:22:39.494917    1637 addons.go:274] "addons-098000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0610 09:22:39.494920    1637 addons.go:274] "addons-098000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0610 09:22:39.495382    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
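A rough manual equivalent of the poll the log starts here (namespace and label selector taken from the line above; the timeout value is an illustrative choice, not the one minikube uses):

	# Wait for the registry addon pods minikube is polling for.
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=5m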
	I0610 09:22:39.500831    1637 addons.go:464] Verifying addon ingress=true in "addons-098000"
	I0610 09:22:39.500842    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500849    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.504830    1637 out.go:177] * Verifying ingress addon...
	I0610 09:22:39.500952    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500997    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 09:22:39.501041    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.501118    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.514859    1637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0610 09:22:39.511954    1637 addons.go:274] "addons-098000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0610 09:22:39.512421    1637 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 09:22:39.517592    1637 addons.go:228] Setting addon default-storageclass=true in "addons-098000"
	I0610 09:22:39.517921    1637 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.518096    1637 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 09:22:39.521871    1637 addons.go:464] Verifying addon metrics-server=true in "addons-098000"
	I0610 09:22:39.527803    1637 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 09:22:39.528897    1637 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 09:22:39.533879    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:22:39.533885    1637 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 09:22:39.533900    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.539950    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 09:22:39.549899    1637 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.549908    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 09:22:39.549915    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540014    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540659    1637 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.550015    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:22:39.550019    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.552885    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 09:22:39.545910    1637 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.547022    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:22:39.555818    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 09:22:39.555836    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.558872    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 09:22:39.563787    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 09:22:39.565032    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 09:22:39.576758    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 09:22:39.585719    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 09:22:39.588857    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 09:22:39.588866    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 09:22:39.588875    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.610676    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.641637    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.644621    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.683769    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.740787    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 09:22:39.740799    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 09:22:39.840307    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 09:22:39.840321    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 09:22:39.985655    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 09:22:39.985667    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 09:22:40.064364    1637 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-098000" context rescaled to 1 replicas
	I0610 09:22:40.064382    1637 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:22:40.068539    1637 out.go:177] * Verifying Kubernetes components...
	I0610 09:22:40.077600    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:40.261757    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 09:22:40.261768    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 09:22:40.290415    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 09:22:40.290425    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 09:22:40.300542    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 09:22:40.300551    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 09:22:40.308642    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 09:22:40.308652    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 09:22:40.313342    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 09:22:40.313353    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 09:22:40.318717    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 09:22:40.318725    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 09:22:40.323460    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.323466    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 09:22:40.335717    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.661069    1637 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105262875s)
	I0610 09:22:40.661101    1637 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 09:22:40.737190    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.12650025s)
	I0610 09:22:40.873352    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23170025s)
	I0610 09:22:40.873360    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.228730125s)
	I0610 09:22:40.873397    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.189617792s)
	I0610 09:22:40.873843    1637 node_ready.go:35] waiting up to 6m0s for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875337    1637 node_ready.go:49] node "addons-098000" has status "Ready":"True"
	I0610 09:22:40.875343    1637 node_ready.go:38] duration metric: took 1.493375ms waiting for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875346    1637 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:40.878632    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881351    1637 pod_ready.go:92] pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:40.881360    1637 pod_ready.go:81] duration metric: took 2.720875ms waiting for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881363    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:41.422744    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.08700475s)
	I0610 09:22:41.422764    1637 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:41.429025    1637 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 09:22:41.436428    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 09:22:41.441210    1637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 09:22:41.441218    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
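The kapi poll above repeats until every pod matching the label selector reports Ready. A manual equivalent (a sketch, assuming kubectl access to the same cluster and the 6m0s timeout the test uses) would be:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m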
	I0610 09:22:41.945707    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.446004    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.891987    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:42.949163    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.445226    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.945705    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.445736    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.893909    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:44.949633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.445855    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.945805    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.106349    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 09:22:46.106363    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.140536    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 09:22:46.145624    1637 addons.go:228] Setting addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.145643    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:46.146378    1637 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 09:22:46.146386    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.179928    1637 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 09:22:46.183883    1637 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 09:22:46.187898    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 09:22:46.187903    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 09:22:46.192588    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 09:22:46.192594    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 09:22:46.199251    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.199256    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 09:22:46.204462    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.429785    1637 addons.go:464] Verifying addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.434320    1637 out.go:177] * Verifying gcp-auth addon...
	I0610 09:22:46.440768    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 09:22:46.443515    1637 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 09:22:46.443521    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:46.446140    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949654    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.389319    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:47.445303    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.446055    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:47.946177    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.946875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.446743    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.447103    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.945711    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.946918    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.389715    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:49.445862    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.448994    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.945095    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.945638    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.446626    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.446936    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.887650    1637 pod_ready.go:97] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887663    1637 pod_ready.go:81] duration metric: took 10.00631125s waiting for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	E0610 09:22:50.887668    1637 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887672    1637 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890299    1637 pod_ready.go:92] pod "etcd-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.890307    1637 pod_ready.go:81] duration metric: took 2.63175ms waiting for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890310    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892694    1637 pod_ready.go:92] pod "kube-apiserver-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.892699    1637 pod_ready.go:81] duration metric: took 2.386083ms waiting for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892703    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895043    1637 pod_ready.go:92] pod "kube-controller-manager-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.895049    1637 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895053    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897341    1637 pod_ready.go:92] pod "kube-proxy-jpnqh" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.897346    1637 pod_ready.go:81] duration metric: took 2.29075ms waiting for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897350    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.945358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.946279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.288420    1637 pod_ready.go:92] pod "kube-scheduler-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:51.288430    1637 pod_ready.go:81] duration metric: took 391.078333ms waiting for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:51.288436    1637 pod_ready.go:38] duration metric: took 10.413098792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:51.288445    1637 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:22:51.288516    1637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:22:51.295818    1637 api_server.go:72] duration metric: took 11.231423584s to wait for apiserver process to appear ...
	I0610 09:22:51.295824    1637 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:22:51.295831    1637 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0610 09:22:51.299125    1637 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
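The healthz probe is a plain HTTPS GET against the API server endpoint shown above; reproducing it from the host looks roughly like this (a sketch; -k skips certificate verification, since the minikube CA is generally not in the host trust store):

    curl -sk https://192.168.105.2:8443/healthz   # expected body: ok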
	I0610 09:22:51.299826    1637 api_server.go:141] control plane version: v1.27.2
	I0610 09:22:51.299832    1637 api_server.go:131] duration metric: took 4.005625ms to wait for apiserver health ...
	I0610 09:22:51.299835    1637 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:22:51.445314    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:51.446212    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.490284    1637 system_pods.go:59] 11 kube-system pods found
	I0610 09:22:51.490295    1637 system_pods.go:61] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.490299    1637 system_pods.go:61] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.490303    1637 system_pods.go:61] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.490306    1637 system_pods.go:61] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.490311    1637 system_pods.go:61] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.490314    1637 system_pods.go:61] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.490317    1637 system_pods.go:61] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.490320    1637 system_pods.go:61] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.490323    1637 system_pods.go:61] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.490325    1637 system_pods.go:61] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.490336    1637 system_pods.go:61] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.490341    1637 system_pods.go:74] duration metric: took 190.503333ms to wait for pod list to return data ...
	I0610 09:22:51.490345    1637 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:22:51.687921    1637 default_sa.go:45] found service account: "default"
	I0610 09:22:51.687931    1637 default_sa.go:55] duration metric: took 197.581625ms for default service account to be created ...
	I0610 09:22:51.687935    1637 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:22:51.890310    1637 system_pods.go:86] 11 kube-system pods found
	I0610 09:22:51.890320    1637 system_pods.go:89] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.890326    1637 system_pods.go:89] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.890330    1637 system_pods.go:89] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.890333    1637 system_pods.go:89] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.890336    1637 system_pods.go:89] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.890338    1637 system_pods.go:89] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.890341    1637 system_pods.go:89] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.890344    1637 system_pods.go:89] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.890349    1637 system_pods.go:89] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.890351    1637 system_pods.go:89] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.890354    1637 system_pods.go:89] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.890357    1637 system_pods.go:126] duration metric: took 202.419584ms to wait for k8s-apps to be running ...
	I0610 09:22:51.890363    1637 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:22:51.890418    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:51.897401    1637 system_svc.go:56] duration metric: took 7.035125ms WaitForService to wait for kubelet.
	I0610 09:22:51.897410    1637 kubeadm.go:581] duration metric: took 11.8330175s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:22:51.897420    1637 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:22:51.944537    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.945311    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.087254    1637 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:22:52.087281    1637 node_conditions.go:123] node cpu capacity is 2
	I0610 09:22:52.087290    1637 node_conditions.go:105] duration metric: took 189.867833ms to run NodePressure ...
	I0610 09:22:52.087295    1637 start.go:228] waiting for startup goroutines ...
	I0610 09:22:52.445279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:52.445610    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.945799    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.946052    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.445389    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.446014    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.945473    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.946237    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.446325    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.448382    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.948114    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.951263    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.447181    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.447511    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.945501    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.946418    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.445349    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.445910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.945410    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.946065    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.447469    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.448009    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.945353    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.946520    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454959    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.946148    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.947450    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.446206    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:00.447700    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.944434    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.945129    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.445646    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.446643    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:01.945710    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.947152    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.450730    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.454285    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.952960    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.955376    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.446358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.447878    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.945294    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.946290    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.445145    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.446164    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.946364    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.946514    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.449729    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.453690    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.947873    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.950281    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.445562    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.445795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.946136    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.947509    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.445951    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.446633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.945814    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.946157    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.446086    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.446099    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.970991    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.971383    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.448620    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:09.449087    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.946728    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.948250    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446827    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446978    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:10.945421    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.945732    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.444797    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.445621    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.948926    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.949262    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.452305    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:12.453786    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.948653    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.949795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.445378    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.446558    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:13.946404    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.946644    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.446073    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.446331    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.946569    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.946725    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.445689    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.446865    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.947373    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.948973    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.445756    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:16.446819    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.944171    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.945088    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.448798    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.450089    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:17.952301    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.955532    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.945244    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.946363    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.445300    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:19.445962    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944002    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944781    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.446084    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:20.446223    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.952440    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.954313    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.445625    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.446916    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.945782    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.947236    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.445836    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.446162    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.945365    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.946169    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.449820    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:23.452877    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.953442    1637 kapi.go:107] duration metric: took 37.512712584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 09:23:23.958122    1637 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-098000 cluster.
	I0610 09:23:23.957179    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.961932    1637 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 09:23:23.965925    1637 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
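Opting a single workload out of the credential mount only requires the label mentioned above on the pod. A minimal sketch (the pod name and image are placeholders, and the label value of "true" is an assumption; the message above only specifies the label key):

    kubectl run skip-gcp-auth-demo --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600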
	I0610 09:23:24.450360    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:24.945980    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.445712    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.946008    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.446034    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.950257    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.454943    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.956882    1637 kapi.go:107] duration metric: took 46.520505042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 09:28:39.510321    1637 kapi.go:107] duration metric: took 6m0.007516916s to wait for kubernetes.io/minikube-addons=registry ...
	W0610 09:28:39.510625    1637 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0610 09:28:39.531369    1637 kapi.go:107] duration metric: took 6m0.011549375s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0610 09:28:39.531491    1637 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
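Both timeouts above come from the same label-selector wait; a first triage step is usually to list whatever pods those selectors actually match (a sketch, assuming kubectl access; the selectors are taken from the two wait messages):

    kubectl get pods -A -l kubernetes.io/minikube-addons=registry
    kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx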
	I0610 09:28:39.539250    1637 out.go:177] * Enabled addons: volumesnapshots, inspektor-gadget, metrics-server, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, gcp-auth, csi-hostpath-driver
	I0610 09:28:39.545184    1637 addons.go:499] enable addons completed in 6m0.055013834s: enabled=[volumesnapshots inspektor-gadget metrics-server cloud-spanner storage-provisioner default-storageclass ingress-dns gcp-auth csi-hostpath-driver]
	I0610 09:28:39.545227    1637 start.go:233] waiting for cluster config update ...
	I0610 09:28:39.545256    1637 start.go:242] writing updated cluster config ...
	I0610 09:28:39.546371    1637 ssh_runner.go:195] Run: rm -f paused
	I0610 09:28:39.689843    1637 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:28:39.694186    1637 out.go:177] 
	W0610 09:28:39.697254    1637 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:28:39.701213    1637 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:28:39.709228    1637 out.go:177] * Done! kubectl is now configured to use "addons-098000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:40:46 UTC. --
	Jun 10 16:28:38 addons-098000 dockerd[939]: time="2023-06-10T16:28:38.780840261Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940455787Z" level=info msg="shim disconnected" id=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940486162Z" level=warning msg="cleaning up after shim disconnected" id=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940492579Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[933]: time="2023-06-10T16:28:44.940636785Z" level=info msg="ignoring event" container=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:28:44 addons-098000 dockerd[933]: time="2023-06-10T16:28:44.998241480Z" level=info msg="ignoring event" container=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998855056Z" level=info msg="shim disconnected" id=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998883722Z" level=warning msg="cleaning up after shim disconnected" id=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998888056Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737483784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737542367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737565825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737574241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:33:46 addons-098000 dockerd[933]: time="2023-06-10T16:33:46.778804028Z" level=info msg="ignoring event" container=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779017026Z" level=info msg="shim disconnected" id=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779051484Z" level=warning msg="cleaning up after shim disconnected" id=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779056025Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.747938173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.747997298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.748248087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.748452960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:38:54 addons-098000 dockerd[933]: time="2023-06-10T16:38:54.805196171Z" level=info msg="ignoring event" container=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805369210Z" level=info msg="shim disconnected" id=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805425252Z" level=warning msg="cleaning up after shim disconnected" id=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805429585Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID
	6877b2a4c1b8b       1499ed4fbd0aa                                                                                                                                About a minute ago   Exited              minikube-ingress-dns                     8                   8e5b404496c4e
	23a8cae6443cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 minutes ago       Running             csi-snapshotter                          0                   567c041b8040d
	1a73024f59864       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 17 minutes ago       Running             gcp-auth                                 0                   d8f3043938a40
	3fa8701fda26c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          17 minutes ago       Running             csi-provisioner                          0                   567c041b8040d
	aafd1d61dfe4b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            17 minutes ago       Running             liveness-probe                           0                   567c041b8040d
	2b6767dfbe9d3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           17 minutes ago       Running             hostpath                                 0                   567c041b8040d
	8f02984364568       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                17 minutes ago       Running             node-driver-registrar                    0                   567c041b8040d
	868cfa9fcba69       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   17 minutes ago       Running             csi-external-health-monitor-controller   0                   567c041b8040d
	26cfafca2bb0d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              17 minutes ago       Running             csi-resizer                              0                   a78a427783820
	c58c2d26acda8       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             17 minutes ago       Running             csi-attacher                             0                   674b1cd12ae30
	46105da82f67a       ba04bb24b9575                                                                                                                                18 minutes ago       Running             storage-provisioner                      0                   67c7765a9fa6e
	de0a71571f8d0       29921a0845422                                                                                                                                18 minutes ago       Running             kube-proxy                               0                   2bc9129027615
	adfb52103967f       97e04611ad434                                                                                                                                18 minutes ago       Running             coredns                                  0                   d428f978de558
	335475d795fcf       305d7ed1dae28                                                                                                                                18 minutes ago       Running             kube-scheduler                           0                   31fdcf4abeef0
	3dcf946c301ce       2ee705380c3c5                                                                                                                                18 minutes ago       Running             kube-controller-manager                  0                   9fed8ca4bd2f8
	74423d2dab41d       72c9df6be7f1b                                                                                                                                18 minutes ago       Running             kube-apiserver                           0                   11d78b6999216
	2a81bf4413e12       24bc64e911039                                                                                                                                18 minutes ago       Running             etcd                                     0                   a20e51a803c8c
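The minikube-ingress-dns container at the top of the table has exited after eight attempts; its most recent crash output can be pulled with (a sketch; the pod name comes from the kube-system pod list earlier in this log):

    kubectl -n kube-system logs kube-ingress-dns-minikube --previous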
	
	* 
	* ==> coredns [adfb52103967] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46766 - 38334 "HINFO IN 1120296007274907072.5268654669647465865. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004199511s
	[INFO] 10.244.0.10:39208 - 36576 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125s
	[INFO] 10.244.0.10:59425 - 64759 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155334s
	[INFO] 10.244.0.10:33915 - 19077 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000037167s
	[INFO] 10.244.0.10:46994 - 65166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002725s
	[INFO] 10.244.0.10:46598 - 37414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043625s
	[INFO] 10.244.0.10:55204 - 18019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000032792s
	[INFO] 10.244.0.10:60613 - 7185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000939127s
	[INFO] 10.244.0.10:40293 - 55849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00103996s
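The NXDOMAIN entries above are the cluster search domains (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local) being tried before the public name resolves. The lookup can be reproduced from inside the cluster with a throwaway pod (a sketch; the pod name and image are placeholders):

    kubectl run dns-check --image=busybox --restart=Never --rm -it -- nslookup storage.googleapis.com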
	
	* 
	* ==> describe nodes <==
	* Name:               addons-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=addons-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-098000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-098000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-098000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:40:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 43359b33bc0f4b9c9610dd4ec5308f62
	  System UUID:                43359b33bc0f4b9c9610dd4ec5308f62
	  Boot ID:                    eb81fa5c-fe8f-47ab-b5e5-9f5fe2e987b0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-jkcxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-5d78c9869d-f2tnn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-pjvh6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-addons-098000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-jpnqh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-098000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-098000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-098000 event: Registered Node addons-098000 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun10 16:22] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.696014] EINJ: EINJ table not found.
	[  +0.658239] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043798] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000807] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.876165] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.071972] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.924516] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +2.288987] systemd-fstab-generator[866]: Ignoring "noauto" for root device
	[  +0.165983] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +0.077870] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.072149] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +1.146266] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.099605] systemd-fstab-generator[1083]: Ignoring "noauto" for root device
	[  +0.082038] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.080513] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.078963] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.086582] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	[  +3.056689] systemd-fstab-generator[1402]: Ignoring "noauto" for root device
	[  +4.651414] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[ +14.757696] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.157496] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.873848] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jun10 16:23] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [2a81bf4413e1] <==
	* {"level":"info","ts":"2023-06-10T16:22:22.463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.857Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-098000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:32:22.450Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":974}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":974,"took":"2.490131ms","hash":4035340276}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035340276,"revision":974,"compact-revision":-1}
	{"level":"info","ts":"2023-06-10T16:37:22.461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1290}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1290,"took":"1.421443ms","hash":2326989487}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2326989487,"revision":1290,"compact-revision":974}
	
	* 
	* ==> gcp-auth [1a73024f5986] <==
	* 2023/06/10 16:23:23 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  16:40:46 up 18 min,  0 users,  load average: 0.59, 0.54, 0.41
	Linux addons-098000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [74423d2dab41] <==
	* I0610 16:22:23.642323       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:22:23.642356       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:22:23.657792       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:22:24.401560       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:22:24.563279       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:22:24.568497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:22:24.568654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:22:24.720978       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:22:24.731371       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:22:24.801810       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:22:24.805350       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0610 16:22:24.806303       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:22:24.807740       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:22:25.583035       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:22:26.059225       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:22:26.063878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:22:26.068513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:22:39.217505       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 16:22:39.917252       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:22:40.754199       1 alloc.go:330] "allocated clusterIPs" service="default/cloud-spanner-emulator" clusterIPs=map[IPv4:10.99.222.169]
	I0610 16:22:41.357691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.106.85.14]
	I0610 16:22:41.362266       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0610 16:22:41.419673       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.111.90.60]
	I0610 16:22:46.394399       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.102.46.8]
	I0610 16:22:46.411449       1 controller.go:624] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [3dcf946c301c] <==
	* I0610 16:22:46.441438       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:22:46.444358       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:22:46.468051       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:09.211557       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:09.224222       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:10.225842       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:10.320708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.244592       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:11.258467       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.330708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.333357       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.335850       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:11.335887       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.336870       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.345682       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.251101       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.256393       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.263577       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:12.263691       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.265671       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.266556       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:41.027747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:41.050836       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:42.013412       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:42.047992       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [de0a71571f8d] <==
	* I0610 16:22:40.477801       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0610 16:22:40.477968       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0610 16:22:40.477988       1 server_others.go:551] "Using iptables proxy"
	I0610 16:22:40.508315       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:22:40.508325       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:22:40.508357       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:22:40.508608       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:22:40.508614       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:22:40.509861       1 config.go:188] "Starting service config controller"
	I0610 16:22:40.509869       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:22:40.509881       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:22:40.509882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:22:40.511342       1 config.go:315] "Starting node config controller"
	I0610 16:22:40.511347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:22:40.609918       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:22:40.609943       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:22:40.611397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [335475d795fc] <==
	* W0610 16:22:23.606482       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:22:23.606891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:22:23.606959       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:22:23.606982       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:22:23.607008       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607026       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607067       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:23.607087       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:23.607166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607247       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:23.607268       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.463642       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:22:24.463731       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:22:24.485768       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:22:24.485809       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:22:24.588161       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:24.588197       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:24.600064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:24.600158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.604631       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:22:24.604651       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:22:24.616055       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:22:24.616131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:22:27.098734       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:40:46 UTC. --
	Jun 10 16:38:55 addons-098000 kubelet[2091]: E0610 16:38:55.162992    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:07 addons-098000 kubelet[2091]: I0610 16:39:07.681259    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:07 addons-098000 kubelet[2091]: E0610 16:39:07.682935    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:19 addons-098000 kubelet[2091]: I0610 16:39:19.682302    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:19 addons-098000 kubelet[2091]: E0610 16:39:19.683995    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:25 addons-098000 kubelet[2091]: E0610 16:39:25.689415    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:39:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:39:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:39:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:39:32 addons-098000 kubelet[2091]: I0610 16:39:32.680792    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:32 addons-098000 kubelet[2091]: E0610 16:39:32.681284    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:47 addons-098000 kubelet[2091]: I0610 16:39:47.681883    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:47 addons-098000 kubelet[2091]: E0610 16:39:47.684253    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:02 addons-098000 kubelet[2091]: I0610 16:40:02.681991    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:02 addons-098000 kubelet[2091]: E0610 16:40:02.683097    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:13 addons-098000 kubelet[2091]: I0610 16:40:13.680609    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:13 addons-098000 kubelet[2091]: E0610 16:40:13.680899    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:25 addons-098000 kubelet[2091]: E0610 16:40:25.787611    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:40:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:40:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:40:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:40:27 addons-098000 kubelet[2091]: I0610 16:40:27.681515    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:27 addons-098000 kubelet[2091]: E0610 16:40:27.682723    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:41 addons-098000 kubelet[2091]: I0610 16:40:41.681329    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:41 addons-098000 kubelet[2091]: E0610 16:40:41.682958    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	
	* 
	* ==> storage-provisioner [46105da82f67] <==
	* I0610 16:22:41.552997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:22:41.564566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:22:41.564604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:22:41.567070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:22:41.567242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b8b8b2f-e69f-4abd-8693-9c0a331852aa", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-098000_976d826c-217e-4d0d-87e7-e825dd783783 became leader
	I0610 16:22:41.567336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	I0610 16:22:41.668274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.79s)
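
The kubelet log above shows kube-ingress-dns-minikube stuck in CrashLoopBackOff for the whole run, which lines up with the ingress-dns addon never becoming usable. A minimal manual follow-up, assuming the addons-098000 cluster is still running and its kubectl context is reachable (these commands are illustrative and not part of the automated test):

	kubectl --context addons-098000 -n kube-system describe pod kube-ingress-dns-minikube
	kubectl --context addons-098000 -n kube-system logs kube-ingress-dns-minikube --previous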

TestAddons/parallel/InspektorGadget (480.9s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:329: TestAddons/parallel/InspektorGadget: WARNING: pod list for "gadget" "k8s-app=gadget" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:814: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:814: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
addons_test.go:814: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-06-10 09:48:40.893311 -0700 PDT m=+1651.884722126
addons_test.go:815: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
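
The wait above amounts to polling for pods labelled k8s-app=gadget in the gadget namespace, none of which reached Running within 8m0s. A quick manual check, assuming the addons-098000 context is still reachable (illustrative only, not part of the test):

	kubectl --context addons-098000 -n gadget get pods -l k8s-app=gadget -o wide
	kubectl --context addons-098000 -n gadget describe pods -l k8s-app=gadget
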
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-098000 -n addons-098000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-098000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | --download-only -p             | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | binary-mirror-025000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000        | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | -p addons-098000               | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:28 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:28 PDT | 10 Jun 23 09:28 PDT |
	|         | addons-098000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:40 PDT |
	|         | -p addons-098000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:54.764352    1637 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:54.764757    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764761    1637 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:54.764764    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764861    1637 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:21:54.766294    1637 out.go:303] Setting JSON to false
	I0610 09:21:54.781540    1637 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1285,"bootTime":1686412829,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:54.781615    1637 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:54.786460    1637 out.go:177] * [addons-098000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:54.793542    1637 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:21:54.798440    1637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:54.793561    1637 notify.go:220] Checking for updates...
	I0610 09:21:54.804413    1637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:54.807450    1637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:54.810460    1637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:21:54.811765    1637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:21:54.814627    1637 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:54.818412    1637 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:21:54.823426    1637 start.go:297] selected driver: qemu2
	I0610 09:21:54.823432    1637 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:54.823441    1637 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:21:54.825256    1637 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:54.828578    1637 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:21:54.831535    1637 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:21:54.831554    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:21:54.831575    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:54.831579    1637 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:21:54.831586    1637 start_flags.go:319] config:
	{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:54.831700    1637 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:54.840445    1637 out.go:177] * Starting control plane node addons-098000 in cluster addons-098000
	I0610 09:21:54.844425    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:54.844451    1637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:21:54.844469    1637 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:54.844530    1637 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 09:21:54.844535    1637 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:21:54.844735    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:21:54.844750    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json: {Name:mkfbe060a3258f68fbe8b01ce26e4a7ada2f24f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:54.844947    1637 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:21:54.844969    1637 start.go:364] acquiring machines lock for addons-098000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:21:54.845063    1637 start.go:368] acquired machines lock for "addons-098000" in 89.292µs
	I0610 09:21:54.845075    1637 start.go:93] Provisioning new machine with config: &{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:21:54.845115    1637 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:21:54.853376    1637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 09:21:55.217388    1637 start.go:159] libmachine.API.Create for "addons-098000" (driver="qemu2")
	I0610 09:21:55.217427    1637 client.go:168] LocalClient.Create starting
	I0610 09:21:55.217549    1637 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:21:55.301145    1637 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:21:55.414002    1637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:21:55.826273    1637 main.go:141] libmachine: Creating SSH key...
	I0610 09:21:55.859428    1637 main.go:141] libmachine: Creating Disk image...
	I0610 09:21:55.859434    1637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:21:55.859612    1637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.941560    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:55.941581    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.941655    1637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2 +20000M
	I0610 09:21:55.948999    1637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:21:55.949013    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.949042    1637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.949049    1637 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:21:55.949080    1637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:e2:60:7a:4e:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:56.034280    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:56.034334    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:56.034338    1637 main.go:141] libmachine: Attempt 0
	I0610 09:21:56.034355    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:21:58.036587    1637 main.go:141] libmachine: Attempt 1
	I0610 09:21:58.036664    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:00.038868    1637 main.go:141] libmachine: Attempt 2
	I0610 09:22:00.038909    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:02.040980    1637 main.go:141] libmachine: Attempt 3
	I0610 09:22:02.040996    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:04.043076    1637 main.go:141] libmachine: Attempt 4
	I0610 09:22:04.043113    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:06.045175    1637 main.go:141] libmachine: Attempt 5
	I0610 09:22:06.045200    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047388    1637 main.go:141] libmachine: Attempt 6
	I0610 09:22:08.047472    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047875    1637 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0610 09:22:08.047987    1637 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6485f4af}
	I0610 09:22:08.048012    1637 main.go:141] libmachine: Found match: c2:e2:60:7a:4e:46
	I0610 09:22:08.048053    1637 main.go:141] libmachine: IP: 192.168.105.2
	I0610 09:22:08.048083    1637 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
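	(The log above shows libmachine discovering the VM's IP by repeatedly scanning /var/db/dhcpd_leases for the MAC address it assigned to the virtio NIC, c2:e2:60:7a:4e:46. The Go sketch below illustrates such a lookup; the lease-file field names ip_address= and hw_address= are assumptions inferred from the parsed entry printed at 09:22:08, not minikube's actual parser.)

	// lease_lookup.go: illustrative sketch of resolving a VM IP from the macOS
	// dhcpd lease database by MAC address. Field names are assumed, see note above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipForMAC(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		matched := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // start of a new lease entry
				ip, matched = "", false
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				matched = true
			case line == "}" && matched && ip != "":
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "c2:e2:60:7a:4e:46")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // e.g. 192.168.105.2, as found at attempt 6 above
	}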
	I0610 09:22:10.069705    1637 machine.go:88] provisioning docker machine ...
	I0610 09:22:10.069788    1637 buildroot.go:166] provisioning hostname "addons-098000"
	I0610 09:22:10.070644    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.071570    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.071588    1637 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-098000 && echo "addons-098000" | sudo tee /etc/hostname
	I0610 09:22:10.164038    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-098000
	
	I0610 09:22:10.164160    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.164626    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.164641    1637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:22:10.239261    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:22:10.239281    1637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:22:10.239300    1637 buildroot.go:174] setting up certificates
	I0610 09:22:10.239307    1637 provision.go:83] configureAuth start
	I0610 09:22:10.239314    1637 provision.go:138] copyHostCerts
	I0610 09:22:10.239507    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:22:10.240632    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:22:10.241010    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:22:10.241260    1637 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.addons-098000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-098000]
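	(The provisioner then signs a per-machine server certificate against the minikube CA with the SAN list shown above: 192.168.105.2, localhost, 127.0.0.1, minikube, addons-098000. The following is a minimal illustrative sketch of building such a certificate with crypto/x509; it is not minikube's provision code, and the organization string and key size are assumptions.)

	// certsketch: illustrative server-certificate generation with the SANs
	// from the provision log above. Assumes a CA cert/key are already loaded.
	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (derCert []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-098000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "addons-098000"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
		}
		derCert, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return derCert, key, nil
	}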
	I0610 09:22:10.307069    1637 provision.go:172] copyRemoteCerts
	I0610 09:22:10.307140    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:22:10.307172    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.339991    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:22:10.346931    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 09:22:10.353742    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:22:10.360626    1637 provision.go:86] duration metric: configureAuth took 121.313416ms
	I0610 09:22:10.360639    1637 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:22:10.361002    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:10.361055    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.361272    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.361276    1637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:22:10.420194    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:22:10.420201    1637 buildroot.go:70] root file system type: tmpfs
	I0610 09:22:10.420251    1637 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:22:10.420295    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.420542    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.420577    1637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:22:10.485025    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:22:10.485070    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.485298    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.485310    1637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:22:10.830569    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:22:10.830580    1637 machine.go:91] provisioned docker machine in 760.843209ms
	I0610 09:22:10.830585    1637 client.go:171] LocalClient.Create took 15.613176541s
	I0610 09:22:10.830594    1637 start.go:167] duration metric: libmachine.API.Create for "addons-098000" took 15.613236583s
	I0610 09:22:10.830598    1637 start.go:300] post-start starting for "addons-098000" (driver="qemu2")
	I0610 09:22:10.830601    1637 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:22:10.830682    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:22:10.830692    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.862119    1637 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:22:10.863469    1637 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:22:10.863478    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:22:10.863540    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:22:10.863565    1637 start.go:303] post-start completed in 32.963459ms
	I0610 09:22:10.863901    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:22:10.864045    1637 start.go:128] duration metric: createHost completed in 16.018950083s
	I0610 09:22:10.864069    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.864287    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.864291    1637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:22:10.923434    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686414131.384712585
	
	I0610 09:22:10.923441    1637 fix.go:207] guest clock: 1686414131.384712585
	I0610 09:22:10.923446    1637 fix.go:220] Guest: 2023-06-10 09:22:11.384712585 -0700 PDT Remote: 2023-06-10 09:22:10.864048 -0700 PDT m=+16.118188126 (delta=520.664585ms)
	I0610 09:22:10.923456    1637 fix.go:191] guest clock delta is within tolerance: 520.664585ms
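	(fix.go compares the guest clock, read over SSH with date, against the host clock and only forces a resync when the delta exceeds a tolerance; here the 520ms delta is accepted. A minimal sketch of that check follows; the 2-second tolerance used in the example is an assumed value for illustration, not taken from this log.)

	// clocksketch: illustrative guest-clock tolerance check, mirroring the
	// delta computation logged by fix.go above. Tolerance value is assumed.
	package clocksketch

	import "time"

	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	// Example from this run: guest 09:22:11.3847 vs host 09:22:10.8640 gives a
	// delta of roughly 520ms, which is within a 2s tolerance, so no resync occurs.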
	I0610 09:22:10.923459    1637 start.go:83] releasing machines lock for "addons-098000", held for 16.0784145s
	I0610 09:22:10.923756    1637 ssh_runner.go:195] Run: cat /version.json
	I0610 09:22:10.923765    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.923833    1637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:22:10.923872    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:11.040251    1637 ssh_runner.go:195] Run: systemctl --version
	I0610 09:22:11.042905    1637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:22:11.045415    1637 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:22:11.045461    1637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:22:11.051643    1637 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:22:11.051653    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:11.051736    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:11.061365    1637 docker.go:633] Got preloaded images: 
	I0610 09:22:11.061374    1637 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:22:11.061418    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:11.064624    1637 ssh_runner.go:195] Run: which lz4
	I0610 09:22:11.066056    1637 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 09:22:11.067511    1637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:22:11.067524    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 09:22:12.384653    1637 docker.go:597] Took 1.318649 seconds to copy over tarball
	I0610 09:22:12.384711    1637 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:22:13.518722    1637 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.133975834s)
	I0610 09:22:13.518746    1637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:22:13.534141    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:13.537423    1637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:22:13.542380    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:13.617910    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:15.783768    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165840375s)
	I0610 09:22:15.783797    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.783942    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.789136    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:22:15.792061    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:22:15.794990    1637 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:22:15.795014    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:22:15.798511    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.801745    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:22:15.804884    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.807635    1637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:22:15.810661    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:22:15.814158    1637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:22:15.817306    1637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:22:15.819948    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:15.905204    1637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:22:15.910905    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.910988    1637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:22:15.916986    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.922219    1637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:22:15.929205    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.933866    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.938677    1637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:22:15.974269    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.979243    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.984512    1637 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:22:15.985792    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:22:15.988369    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:22:15.993006    1637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:22:16.073036    1637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:22:16.147707    1637 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:22:16.147726    1637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:22:16.152764    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:16.219604    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:17.389947    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170326875s)
	I0610 09:22:17.390012    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.468450    1637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:22:17.548751    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.629562    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.707590    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:22:17.714930    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.794794    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:22:17.819341    1637 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:22:17.819427    1637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:22:17.821557    1637 start.go:549] Will wait 60s for crictl version
	I0610 09:22:17.821591    1637 ssh_runner.go:195] Run: which crictl
	I0610 09:22:17.825207    1637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:22:17.842430    1637 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:22:17.842501    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.850299    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.866701    1637 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:22:17.866866    1637 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:22:17.868327    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.871885    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:17.871927    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.877489    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.877499    1637 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:22:17.877550    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.883143    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.883157    1637 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:22:17.883198    1637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:22:17.890410    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:17.890420    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:17.890445    1637 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:22:17.890455    1637 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-098000 NodeName:addons-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:22:17.890526    1637 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-098000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:22:17.890573    1637 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:22:17.890631    1637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:22:17.893850    1637 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:22:17.893880    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:22:17.896724    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0610 09:22:17.901642    1637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:22:17.906483    1637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0610 09:22:17.911373    1637 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0610 09:22:17.912694    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.916067    1637 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000 for IP: 192.168.105.2
	I0610 09:22:17.916076    1637 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:17.916236    1637 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:22:18.022564    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt ...
	I0610 09:22:18.022569    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt: {Name:mk821d9de36f93438ad430683cb25e2f1c33c9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022803    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key ...
	I0610 09:22:18.022806    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key: {Name:mk750eea32c0b02b6ad84d81711cbfd77ceefe90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022913    1637 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:22:18.159699    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt ...
	I0610 09:22:18.159708    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt: {Name:mk10e39bee2c5c6785228bc7733548a740243d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.159914    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key ...
	I0610 09:22:18.159917    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key: {Name:mk04d776031cd8d2755a757ba7736e35a9c25212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.160037    1637 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key
	I0610 09:22:18.160044    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt with IP's: []
	I0610 09:22:18.246526    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt ...
	I0610 09:22:18.246530    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: {Name:mk301aca75dad20ac385eb683aae1662edff3d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246697    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key ...
	I0610 09:22:18.246700    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key: {Name:mkdf4a2bc618a029a53fbd786e41dffe68b8316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246803    1637 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969
	I0610 09:22:18.246812    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:22:18.411436    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 ...
	I0610 09:22:18.411440    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969: {Name:mk922ab871b245e2b8e7e4b2a109a553fe1bcc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411596    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 ...
	I0610 09:22:18.411599    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969: {Name:mkdde2defc189629d0924fe6871b2adb52e47c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411697    1637 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt
	I0610 09:22:18.411933    1637 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key
	I0610 09:22:18.412033    1637 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key
	I0610 09:22:18.412047    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt with IP's: []
	I0610 09:22:18.578568    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt ...
	I0610 09:22:18.578583    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt: {Name:mkb4544f3ff14d84a98fd9ec92bfcdbb5d50e84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.578783    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key ...
	I0610 09:22:18.578786    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key: {Name:mk82ce3998197ea814bf8f591a5b4b56c617f405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.579030    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:22:18.579468    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:22:18.579491    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:22:18.579672    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:22:18.580285    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:22:18.587660    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:22:18.594728    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:22:18.602219    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 09:22:18.609690    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:22:18.617442    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:22:18.624297    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:22:18.631049    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:22:18.638070    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:22:18.644969    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:22:18.650094    1637 ssh_runner.go:195] Run: openssl version
	I0610 09:22:18.652167    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:22:18.655090    1637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656540    1637 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656561    1637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.658363    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 09:22:18.661572    1637 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:22:18.662872    1637 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:22:18.662908    1637 kubeadm.go:404] StartCluster: {Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:22:18.662975    1637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:22:18.668496    1637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:22:18.671389    1637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:22:18.674606    1637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:22:18.677626    1637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:22:18.677644    1637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:22:18.703158    1637 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:22:18.703188    1637 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:22:18.757797    1637 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:22:18.757860    1637 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:22:18.757910    1637 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:22:18.816123    1637 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:22:18.821365    1637 out.go:204]   - Generating certificates and keys ...
	I0610 09:22:18.821409    1637 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:22:18.821441    1637 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:22:19.085233    1637 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:22:19.181413    1637 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:22:19.330348    1637 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:22:19.412707    1637 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:22:19.604000    1637 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:22:19.604069    1637 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.814398    1637 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:22:19.814478    1637 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.907005    1637 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:22:20.056367    1637 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:22:20.125295    1637 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:22:20.125333    1637 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:22:20.241297    1637 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:22:20.330399    1637 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:22:20.489216    1637 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:22:20.764229    1637 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:22:20.771051    1637 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:22:20.771103    1637 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:22:20.771135    1637 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:22:20.859965    1637 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:22:20.864105    1637 out.go:204]   - Booting up control plane ...
	I0610 09:22:20.864178    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:22:20.864224    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:22:20.864257    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:22:20.864302    1637 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:22:20.865267    1637 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:22:24.366796    1637 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501337 seconds
	I0610 09:22:24.366861    1637 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:22:24.372204    1637 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:22:24.898455    1637 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:22:24.898779    1637 kubeadm.go:322] [mark-control-plane] Marking the node addons-098000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:22:25.404043    1637 kubeadm.go:322] [bootstrap-token] Using token: 8xmw5d.kvohdu7dlcpn05ob
	I0610 09:22:25.410608    1637 out.go:204]   - Configuring RBAC rules ...
	I0610 09:22:25.410669    1637 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:22:25.411737    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:22:25.418545    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:22:25.419904    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:22:25.421252    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:22:25.422283    1637 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:22:25.427205    1637 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:22:25.603958    1637 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:22:25.815834    1637 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:22:25.816185    1637 kubeadm.go:322] 
	I0610 09:22:25.816225    1637 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:22:25.816233    1637 kubeadm.go:322] 
	I0610 09:22:25.816291    1637 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:22:25.816295    1637 kubeadm.go:322] 
	I0610 09:22:25.816308    1637 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:22:25.816346    1637 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:22:25.816388    1637 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:22:25.816392    1637 kubeadm.go:322] 
	I0610 09:22:25.816425    1637 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:22:25.816430    1637 kubeadm.go:322] 
	I0610 09:22:25.816463    1637 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:22:25.816466    1637 kubeadm.go:322] 
	I0610 09:22:25.816508    1637 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:22:25.816560    1637 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:22:25.816602    1637 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:22:25.816605    1637 kubeadm.go:322] 
	I0610 09:22:25.816653    1637 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:22:25.816694    1637 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:22:25.816699    1637 kubeadm.go:322] 
	I0610 09:22:25.816749    1637 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.816801    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:22:25.816815    1637 kubeadm.go:322] 	--control-plane 
	I0610 09:22:25.816823    1637 kubeadm.go:322] 
	I0610 09:22:25.816880    1637 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:22:25.816883    1637 kubeadm.go:322] 
	I0610 09:22:25.816931    1637 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.817003    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:22:25.817072    1637 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:22:25.817175    1637 kubeadm.go:322] W0610 16:22:19.219117    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817283    1637 kubeadm.go:322] W0610 16:22:21.323610    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817294    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:25.817303    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:25.823848    1637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:22:25.826928    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:22:25.830443    1637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 09:22:25.836316    1637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:22:25.836378    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.836393    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=addons-098000 minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.900338    1637 ops.go:34] apiserver oom_adj: -16
	I0610 09:22:25.900382    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.433306    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.933284    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.433115    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.933305    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.433535    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.933493    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.433524    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.932908    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.433563    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.933551    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.433517    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.933506    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.433459    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.933537    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.433223    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.933503    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.432603    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.933481    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.433267    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.933228    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.433253    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.933272    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.433226    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.933202    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.431772    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.933197    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.432078    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.482163    1637 kubeadm.go:1076] duration metric: took 13.645838667s to wait for elevateKubeSystemPrivileges.
	I0610 09:22:39.482178    1637 kubeadm.go:406] StartCluster complete in 20.819301625s
	I0610 09:22:39.482188    1637 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482341    1637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:22:39.482516    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482746    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:22:39.482786    1637 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 09:22:39.482870    1637 addons.go:66] Setting volumesnapshots=true in profile "addons-098000"
	I0610 09:22:39.482872    1637 addons.go:66] Setting inspektor-gadget=true in profile "addons-098000"
	I0610 09:22:39.482879    1637 addons.go:228] Setting addon volumesnapshots=true in "addons-098000"
	I0610 09:22:39.482922    1637 addons.go:66] Setting registry=true in profile "addons-098000"
	I0610 09:22:39.482902    1637 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-098000"
	I0610 09:22:39.482936    1637 addons.go:228] Setting addon registry=true in "addons-098000"
	I0610 09:22:39.482958    1637 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:39.482880    1637 addons.go:228] Setting addon inspektor-gadget=true in "addons-098000"
	I0610 09:22:39.482979    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482984    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482878    1637 addons.go:66] Setting gcp-auth=true in profile "addons-098000"
	I0610 09:22:39.483016    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483020    1637 mustload.go:65] Loading cluster: addons-098000
	I0610 09:22:39.483034    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483276    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.483275    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.482885    1637 addons.go:66] Setting ingress=true in profile "addons-098000"
	I0610 09:22:39.483383    1637 addons.go:228] Setting addon ingress=true in "addons-098000"
	I0610 09:22:39.483423    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.483508    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483523    1637 addons.go:274] "addons-098000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0610 09:22:39.483525    1637 addons.go:464] Verifying addon registry=true in "addons-098000"
	W0610 09:22:39.483511    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483543    1637 addons.go:274] "addons-098000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0610 09:22:39.487787    1637 out.go:177] * Verifying registry addon...
	I0610 09:22:39.482886    1637 addons.go:66] Setting default-storageclass=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting cloud-spanner=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting ingress-dns=true in profile "addons-098000"
	I0610 09:22:39.482892    1637 addons.go:66] Setting storage-provisioner=true in profile "addons-098000"
	I0610 09:22:39.482899    1637 addons.go:66] Setting metrics-server=true in profile "addons-098000"
	W0610 09:22:39.483773    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483867    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.484558    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.494895    1637 addons.go:228] Setting addon ingress-dns=true in "addons-098000"
	I0610 09:22:39.494904    1637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-098000"
	I0610 09:22:39.494907    1637 addons.go:228] Setting addon metrics-server=true in "addons-098000"
	I0610 09:22:39.494911    1637 addons.go:228] Setting addon cloud-spanner=true in "addons-098000"
	I0610 09:22:39.494913    1637 addons.go:228] Setting addon storage-provisioner=true in "addons-098000"
	W0610 09:22:39.494917    1637 addons.go:274] "addons-098000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0610 09:22:39.494920    1637 addons.go:274] "addons-098000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0610 09:22:39.495382    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 09:22:39.500831    1637 addons.go:464] Verifying addon ingress=true in "addons-098000"
	I0610 09:22:39.500842    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500849    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.504830    1637 out.go:177] * Verifying ingress addon...
	I0610 09:22:39.500952    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500997    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 09:22:39.501041    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.501118    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.514859    1637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0610 09:22:39.511954    1637 addons.go:274] "addons-098000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0610 09:22:39.512421    1637 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 09:22:39.517592    1637 addons.go:228] Setting addon default-storageclass=true in "addons-098000"
	I0610 09:22:39.517921    1637 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.518096    1637 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 09:22:39.521871    1637 addons.go:464] Verifying addon metrics-server=true in "addons-098000"
	I0610 09:22:39.527803    1637 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 09:22:39.528897    1637 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 09:22:39.533879    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:22:39.533885    1637 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 09:22:39.533900    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.539950    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 09:22:39.549899    1637 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.549908    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 09:22:39.549915    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540014    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540659    1637 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.550015    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:22:39.550019    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.552885    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 09:22:39.545910    1637 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.547022    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:22:39.555818    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 09:22:39.555836    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.558872    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 09:22:39.563787    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 09:22:39.565032    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 09:22:39.576758    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 09:22:39.585719    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 09:22:39.588857    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 09:22:39.588866    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 09:22:39.588875    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.610676    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.641637    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.644621    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.683769    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.740787    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 09:22:39.740799    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 09:22:39.840307    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 09:22:39.840321    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 09:22:39.985655    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 09:22:39.985667    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 09:22:40.064364    1637 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-098000" context rescaled to 1 replicas
	I0610 09:22:40.064382    1637 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:22:40.068539    1637 out.go:177] * Verifying Kubernetes components...
	I0610 09:22:40.077600    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:40.261757    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 09:22:40.261768    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 09:22:40.290415    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 09:22:40.290425    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 09:22:40.300542    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 09:22:40.300551    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 09:22:40.308642    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 09:22:40.308652    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 09:22:40.313342    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 09:22:40.313353    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 09:22:40.318717    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 09:22:40.318725    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 09:22:40.323460    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.323466    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 09:22:40.335717    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.661069    1637 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105262875s)
	I0610 09:22:40.661101    1637 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 09:22:40.737190    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.12650025s)
	I0610 09:22:40.873352    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23170025s)
	I0610 09:22:40.873360    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.228730125s)
	I0610 09:22:40.873397    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.189617792s)
	I0610 09:22:40.873843    1637 node_ready.go:35] waiting up to 6m0s for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875337    1637 node_ready.go:49] node "addons-098000" has status "Ready":"True"
	I0610 09:22:40.875343    1637 node_ready.go:38] duration metric: took 1.493375ms waiting for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875346    1637 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:40.878632    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881351    1637 pod_ready.go:92] pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:40.881360    1637 pod_ready.go:81] duration metric: took 2.720875ms waiting for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881363    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:41.422744    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.08700475s)
	I0610 09:22:41.422764    1637 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:41.429025    1637 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 09:22:41.436428    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 09:22:41.441210    1637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 09:22:41.441218    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:41.945707    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.446004    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.891987    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:42.949163    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.445226    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.945705    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.445736    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.893909    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:44.949633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.445855    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.945805    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.106349    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 09:22:46.106363    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.140536    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 09:22:46.145624    1637 addons.go:228] Setting addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.145643    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:46.146378    1637 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 09:22:46.146386    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.179928    1637 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 09:22:46.183883    1637 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 09:22:46.187898    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 09:22:46.187903    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 09:22:46.192588    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 09:22:46.192594    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 09:22:46.199251    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.199256    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 09:22:46.204462    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.429785    1637 addons.go:464] Verifying addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.434320    1637 out.go:177] * Verifying gcp-auth addon...
	I0610 09:22:46.440768    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 09:22:46.443515    1637 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 09:22:46.443521    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:46.446140    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949654    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.389319    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:47.445303    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.446055    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:47.946177    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.946875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.446743    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.447103    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.945711    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.946918    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.389715    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:49.445862    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.448994    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.945095    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.945638    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.446626    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.446936    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.887650    1637 pod_ready.go:97] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887663    1637 pod_ready.go:81] duration metric: took 10.00631125s waiting for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	E0610 09:22:50.887668    1637 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887672    1637 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890299    1637 pod_ready.go:92] pod "etcd-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.890307    1637 pod_ready.go:81] duration metric: took 2.63175ms waiting for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890310    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892694    1637 pod_ready.go:92] pod "kube-apiserver-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.892699    1637 pod_ready.go:81] duration metric: took 2.386083ms waiting for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892703    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895043    1637 pod_ready.go:92] pod "kube-controller-manager-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.895049    1637 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895053    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897341    1637 pod_ready.go:92] pod "kube-proxy-jpnqh" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.897346    1637 pod_ready.go:81] duration metric: took 2.29075ms waiting for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897350    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.945358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.946279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.288420    1637 pod_ready.go:92] pod "kube-scheduler-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:51.288430    1637 pod_ready.go:81] duration metric: took 391.078333ms waiting for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:51.288436    1637 pod_ready.go:38] duration metric: took 10.413098792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:51.288445    1637 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:22:51.288516    1637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:22:51.295818    1637 api_server.go:72] duration metric: took 11.231423584s to wait for apiserver process to appear ...
	I0610 09:22:51.295824    1637 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:22:51.295831    1637 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0610 09:22:51.299125    1637 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0610 09:22:51.299826    1637 api_server.go:141] control plane version: v1.27.2
	I0610 09:22:51.299832    1637 api_server.go:131] duration metric: took 4.005625ms to wait for apiserver health ...
	I0610 09:22:51.299835    1637 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:22:51.445314    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:51.446212    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.490284    1637 system_pods.go:59] 11 kube-system pods found
	I0610 09:22:51.490295    1637 system_pods.go:61] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.490299    1637 system_pods.go:61] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.490303    1637 system_pods.go:61] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.490306    1637 system_pods.go:61] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.490311    1637 system_pods.go:61] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.490314    1637 system_pods.go:61] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.490317    1637 system_pods.go:61] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.490320    1637 system_pods.go:61] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.490323    1637 system_pods.go:61] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.490325    1637 system_pods.go:61] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.490336    1637 system_pods.go:61] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.490341    1637 system_pods.go:74] duration metric: took 190.503333ms to wait for pod list to return data ...
	I0610 09:22:51.490345    1637 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:22:51.687921    1637 default_sa.go:45] found service account: "default"
	I0610 09:22:51.687931    1637 default_sa.go:55] duration metric: took 197.581625ms for default service account to be created ...
	I0610 09:22:51.687935    1637 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:22:51.890310    1637 system_pods.go:86] 11 kube-system pods found
	I0610 09:22:51.890320    1637 system_pods.go:89] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.890326    1637 system_pods.go:89] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.890330    1637 system_pods.go:89] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.890333    1637 system_pods.go:89] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.890336    1637 system_pods.go:89] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.890338    1637 system_pods.go:89] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.890341    1637 system_pods.go:89] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.890344    1637 system_pods.go:89] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.890349    1637 system_pods.go:89] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.890351    1637 system_pods.go:89] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.890354    1637 system_pods.go:89] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.890357    1637 system_pods.go:126] duration metric: took 202.419584ms to wait for k8s-apps to be running ...
	I0610 09:22:51.890363    1637 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:22:51.890418    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:51.897401    1637 system_svc.go:56] duration metric: took 7.035125ms WaitForService to wait for kubelet.
	I0610 09:22:51.897410    1637 kubeadm.go:581] duration metric: took 11.8330175s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:22:51.897420    1637 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:22:51.944537    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.945311    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.087254    1637 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:22:52.087281    1637 node_conditions.go:123] node cpu capacity is 2
	I0610 09:22:52.087290    1637 node_conditions.go:105] duration metric: took 189.867833ms to run NodePressure ...
	I0610 09:22:52.087295    1637 start.go:228] waiting for startup goroutines ...
	I0610 09:22:52.445279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:52.445610    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.945799    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.946052    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.445389    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.446014    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.945473    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.946237    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.446325    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.448382    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.948114    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.951263    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.447181    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.447511    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.945501    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.946418    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.445349    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.445910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.945410    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.946065    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.447469    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.448009    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.945353    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.946520    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454959    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.946148    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.947450    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.446206    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:00.447700    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.944434    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.945129    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.445646    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.446643    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:01.945710    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.947152    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.450730    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.454285    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.952960    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.955376    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.446358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.447878    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.945294    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.946290    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.445145    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.446164    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.946364    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.946514    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.449729    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.453690    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.947873    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.950281    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.445562    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.445795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.946136    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.947509    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.445951    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.446633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.945814    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.946157    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.446086    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.446099    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.970991    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.971383    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.448620    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:09.449087    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.946728    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.948250    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446827    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446978    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:10.945421    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.945732    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.444797    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.445621    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.948926    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.949262    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.452305    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:12.453786    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.948653    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.949795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.445378    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.446558    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:13.946404    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.946644    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.446073    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.446331    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.946569    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.946725    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.445689    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.446865    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.947373    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.948973    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.445756    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:16.446819    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.944171    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.945088    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.448798    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.450089    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:17.952301    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.955532    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.945244    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.946363    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.445300    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:19.445962    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944002    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944781    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.446084    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:20.446223    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.952440    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.954313    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.445625    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.446916    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.945782    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.947236    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.445836    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.446162    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.945365    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.946169    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.449820    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:23.452877    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.953442    1637 kapi.go:107] duration metric: took 37.512712584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 09:23:23.958122    1637 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-098000 cluster.
	I0610 09:23:23.957179    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.961932    1637 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 09:23:23.965925    1637 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 09:23:24.450360    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:24.945980    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.445712    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.946008    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.446034    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.950257    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.454943    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.956882    1637 kapi.go:107] duration metric: took 46.520505042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 09:28:39.510321    1637 kapi.go:107] duration metric: took 6m0.007516916s to wait for kubernetes.io/minikube-addons=registry ...
	W0610 09:28:39.510625    1637 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0610 09:28:39.531369    1637 kapi.go:107] duration metric: took 6m0.011549375s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0610 09:28:39.531491    1637 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0610 09:28:39.539250    1637 out.go:177] * Enabled addons: volumesnapshots, inspektor-gadget, metrics-server, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, gcp-auth, csi-hostpath-driver
	I0610 09:28:39.545184    1637 addons.go:499] enable addons completed in 6m0.055013834s: enabled=[volumesnapshots inspektor-gadget metrics-server cloud-spanner storage-provisioner default-storageclass ingress-dns gcp-auth csi-hostpath-driver]
	I0610 09:28:39.545227    1637 start.go:233] waiting for cluster config update ...
	I0610 09:28:39.545256    1637 start.go:242] writing updated cluster config ...
	I0610 09:28:39.546371    1637 ssh_runner.go:195] Run: rm -f paused
	I0610 09:28:39.689843    1637 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:28:39.694186    1637 out.go:177] 
	W0610 09:28:39.697254    1637 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:28:39.701213    1637 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:28:39.709228    1637 out.go:177] * Done! kubectl is now configured to use "addons-098000" cluster and "default" namespace by default
	
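The gcp-auth messages above describe a how-to: pods created while the addon is enabled get GCP credentials mounted unless their spec carries the gcp-auth-skip-secret label. Below is a minimal sketch of opting a pod out, assuming client-go, the default kubeconfig path, and hypothetical pod/namespace/image names (none of which appear in this report):

	// Illustrative sketch only (not part of the report output): create a pod
	// whose spec carries the gcp-auth-skip-secret label, so the minikube
	// gcp-auth webhook skips mounting GCP credentials into it.
	// Pod name, namespace, image and the label value "true" are assumptions.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig that minikube writes for the cluster (default path).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds-demo", // hypothetical name
				Namespace: "default",
				// Label key from the gcp-auth advisory in the log above;
				// present at creation time, it tells the webhook to skip the mount.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"},
				},
			},
		}

		if _, err := cs.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

Because the label is read when the pod is admitted, it has to be present at creation time; as the log notes, existing pods only change mounting behaviour after being recreated or after rerunning addons enable with --refresh.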
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:48:41 UTC. --
	Jun 10 16:40:47 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:40:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b1c09a27c65e6025a8628d242894806ef3c84888ee3461978597ea82c9a8359/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 16:40:47 addons-098000 dockerd[933]: time="2023-06-10T16:40:47.656476255Z" level=warning msg="reference for unknown type: " digest="sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be" remote="ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 10 16:40:51 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:40:51Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.17.1@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626101618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626131368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626141868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626146368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.637249959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.637496039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.637517955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.646608437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:14 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:41:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ecfebe56570bafdd4953fcbed5cc491440e7a38433b98e22741f399c50daace4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 16:41:18 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:41:18Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961496642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961525017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961721889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961733472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713886345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713965761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713982010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713993260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:44:06 addons-098000 dockerd[933]: time="2023-06-10T16:44:06.762500900Z" level=info msg="ignoring event" container=d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.762943477Z" level=info msg="shim disconnected" id=d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f namespace=moby
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.762974435Z" level=warning msg="cleaning up after shim disconnected" id=d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f namespace=moby
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.762978435Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	d8c42421d5531       1499ed4fbd0aa                                                                                                                                4 minutes ago       Exited              minikube-ingress-dns                     9                   8e5b404496c4e
	5db3f4d3cbb1e       nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305                                                                7 minutes ago       Running             task-pv-container                        0                   ecfebe56570ba
	efcb07dff66ed       ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be                                        7 minutes ago       Running             headlamp                                 0                   4b1c09a27c65e
	23a8cae6443cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          25 minutes ago      Running             csi-snapshotter                          0                   567c041b8040d
	1a73024f59864       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 25 minutes ago      Running             gcp-auth                                 0                   d8f3043938a40
	3fa8701fda26c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          25 minutes ago      Running             csi-provisioner                          0                   567c041b8040d
	aafd1d61dfe4b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            25 minutes ago      Running             liveness-probe                           0                   567c041b8040d
	2b6767dfbe9d3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           25 minutes ago      Running             hostpath                                 0                   567c041b8040d
	8f02984364568       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                25 minutes ago      Running             node-driver-registrar                    0                   567c041b8040d
	868cfa9fcba69       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   25 minutes ago      Running             csi-external-health-monitor-controller   0                   567c041b8040d
	26cfafca2bb0d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              25 minutes ago      Running             csi-resizer                              0                   a78a427783820
	c58c2d26acda8       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             25 minutes ago      Running             csi-attacher                             0                   674b1cd12ae30
	46105da82f67a       ba04bb24b9575                                                                                                                                26 minutes ago      Running             storage-provisioner                      0                   67c7765a9fa6e
	de0a71571f8d0       29921a0845422                                                                                                                                26 minutes ago      Running             kube-proxy                               0                   2bc9129027615
	adfb52103967f       97e04611ad434                                                                                                                                26 minutes ago      Running             coredns                                  0                   d428f978de558
	335475d795fcf       305d7ed1dae28                                                                                                                                26 minutes ago      Running             kube-scheduler                           0                   31fdcf4abeef0
	3dcf946c301ce       2ee705380c3c5                                                                                                                                26 minutes ago      Running             kube-controller-manager                  0                   9fed8ca4bd2f8
	74423d2dab41d       72c9df6be7f1b                                                                                                                                26 minutes ago      Running             kube-apiserver                           0                   11d78b6999216
	2a81bf4413e12       24bc64e911039                                                                                                                                26 minutes ago      Running             etcd                                     0                   a20e51a803c8c
	
	* 
	* ==> coredns [adfb52103967] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46766 - 38334 "HINFO IN 1120296007274907072.5268654669647465865. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004199511s
	[INFO] 10.244.0.10:39208 - 36576 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125s
	[INFO] 10.244.0.10:59425 - 64759 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155334s
	[INFO] 10.244.0.10:33915 - 19077 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000037167s
	[INFO] 10.244.0.10:46994 - 65166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002725s
	[INFO] 10.244.0.10:46598 - 37414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043625s
	[INFO] 10.244.0.10:55204 - 18019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000032792s
	[INFO] 10.244.0.10:60613 - 7185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000939127s
	[INFO] 10.244.0.10:40293 - 55849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00103996s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=addons-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-098000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-098000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-098000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:48:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 43359b33bc0f4b9c9610dd4ec5308f62
	  System UUID:                43359b33bc0f4b9c9610dd4ec5308f62
	  Boot ID:                    eb81fa5c-fe8f-47ab-b5e5-9f5fe2e987b0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  gcp-auth                    gcp-auth-58478865f7-jkcxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  headlamp                    headlamp-6b5756787-6wqrt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 coredns-5d78c9869d-f2tnn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 csi-hostpathplugin-pjvh6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-addons-098000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         26m
	  kube-system                 kube-apiserver-addons-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-addons-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-jpnqh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-addons-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 26m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m   kubelet          Node addons-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m   kubelet          Node addons-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m   kubelet          Node addons-098000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                26m   kubelet          Node addons-098000 status is now: NodeReady
	  Normal  RegisteredNode           26m   node-controller  Node addons-098000 event: Registered Node addons-098000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.696014] EINJ: EINJ table not found.
	[  +0.658239] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043798] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000807] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.876165] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.071972] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.924516] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +2.288987] systemd-fstab-generator[866]: Ignoring "noauto" for root device
	[  +0.165983] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +0.077870] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.072149] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +1.146266] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.099605] systemd-fstab-generator[1083]: Ignoring "noauto" for root device
	[  +0.082038] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.080513] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.078963] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.086582] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	[  +3.056689] systemd-fstab-generator[1402]: Ignoring "noauto" for root device
	[  +4.651414] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[ +14.757696] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.157496] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.873848] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jun10 16:23] kauditd_printk_skb: 12 callbacks suppressed
	[Jun10 16:41] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [2a81bf4413e1] <==
	* {"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.857Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-098000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:32:22.450Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":974}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":974,"took":"2.490131ms","hash":4035340276}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035340276,"revision":974,"compact-revision":-1}
	{"level":"info","ts":"2023-06-10T16:37:22.461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1290}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1290,"took":"1.421443ms","hash":2326989487}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2326989487,"revision":1290,"compact-revision":974}
	{"level":"info","ts":"2023-06-10T16:40:51.234Z","caller":"traceutil/trace.go:171","msg":"trace[700287871] transaction","detail":"{read_only:false; response_revision:1833; number_of_response:1; }","duration":"135.849795ms","start":"2023-06-10T16:40:51.098Z","end":"2023-06-10T16:40:51.234Z","steps":["trace[700287871] 'process raft request'  (duration: 135.764879ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T16:42:22.466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1593}
	{"level":"info","ts":"2023-06-10T16:42:22.468Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1593,"took":"1.224024ms","hash":3268633527}
	{"level":"info","ts":"2023-06-10T16:42:22.468Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3268633527,"revision":1593,"compact-revision":1290}
	{"level":"info","ts":"2023-06-10T16:47:22.473Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1959}
	{"level":"info","ts":"2023-06-10T16:47:22.475Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1959,"took":"1.126903ms","hash":690256152}
	{"level":"info","ts":"2023-06-10T16:47:22.475Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":690256152,"revision":1959,"compact-revision":1593}
	
	* 
	* ==> gcp-auth [1a73024f5986] <==
	* 2023/06/10 16:23:23 GCP Auth Webhook started!
	2023/06/10 16:40:46 Ready to marshal response ...
	2023/06/10 16:40:46 Ready to write response ...
	2023/06/10 16:40:46 Ready to marshal response ...
	2023/06/10 16:40:46 Ready to write response ...
	2023/06/10 16:40:46 Ready to marshal response ...
	2023/06/10 16:40:46 Ready to write response ...
	2023/06/10 16:41:14 Ready to marshal response ...
	2023/06/10 16:41:14 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  16:48:41 up 26 min,  0 users,  load average: 0.39, 0.48, 0.41
	Linux addons-098000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [74423d2dab41] <==
	* I0610 16:22:23.642356       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:22:23.657792       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:22:24.401560       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:22:24.563279       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:22:24.568497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:22:24.568654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:22:24.720978       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:22:24.731371       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:22:24.801810       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:22:24.805350       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0610 16:22:24.806303       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:22:24.807740       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:22:25.583035       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:22:26.059225       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:22:26.063878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:22:26.068513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:22:39.217505       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 16:22:39.917252       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:22:40.754199       1 alloc.go:330] "allocated clusterIPs" service="default/cloud-spanner-emulator" clusterIPs=map[IPv4:10.99.222.169]
	I0610 16:22:41.357691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.106.85.14]
	I0610 16:22:41.362266       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0610 16:22:41.419673       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.111.90.60]
	I0610 16:22:46.394399       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.102.46.8]
	I0610 16:22:46.411449       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0610 16:40:46.897391       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.104.126.229]
	
	* 
	* ==> kube-controller-manager [3dcf946c301c] <==
	* I0610 16:23:11.258467       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.330708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.333357       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.335850       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:11.335887       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.336870       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.345682       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.251101       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.256393       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.263577       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:12.263691       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.265671       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.266556       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:41.027747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:41.050836       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:42.013412       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:42.047992       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:40:46.909946       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-6b5756787 to 1"
	I0610 16:40:46.923328       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-6b5756787-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	E0610 16:40:46.939753       1 replica_set.go:544] sync "headlamp/headlamp-6b5756787" failed with pods "headlamp-6b5756787-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0610 16:40:46.962022       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-6b5756787-6wqrt"
	I0610 16:40:58.095565       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0610 16:40:58.095582       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0610 16:41:08.457597       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0610 16:41:13.738721       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [de0a71571f8d] <==
	* I0610 16:22:40.477801       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0610 16:22:40.477968       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0610 16:22:40.477988       1 server_others.go:551] "Using iptables proxy"
	I0610 16:22:40.508315       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:22:40.508325       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:22:40.508357       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:22:40.508608       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:22:40.508614       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:22:40.509861       1 config.go:188] "Starting service config controller"
	I0610 16:22:40.509869       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:22:40.509881       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:22:40.509882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:22:40.511342       1 config.go:315] "Starting node config controller"
	I0610 16:22:40.511347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:22:40.609918       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:22:40.609943       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:22:40.611397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [335475d795fc] <==
	* W0610 16:22:23.606482       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:22:23.606891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:22:23.606959       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:22:23.606982       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:22:23.607008       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607026       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607067       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:23.607087       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:23.607166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607247       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:23.607268       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.463642       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:22:24.463731       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:22:24.485768       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:22:24.485809       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:22:24.588161       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:24.588197       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:24.600064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:24.600158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.604631       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:22:24.604651       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:22:24.616055       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:22:24.616131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:22:27.098734       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:48:41 UTC. --
	Jun 10 16:47:14 addons-098000 kubelet[2091]: I0610 16:47:14.680334    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:47:14 addons-098000 kubelet[2091]: E0610 16:47:14.680480    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:47:25 addons-098000 kubelet[2091]: E0610 16:47:25.684874    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:47:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:47:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:47:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:47:25 addons-098000 kubelet[2091]: W0610 16:47:25.694341    2091 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 10 16:47:27 addons-098000 kubelet[2091]: I0610 16:47:27.681562    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:47:27 addons-098000 kubelet[2091]: E0610 16:47:27.682569    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:47:40 addons-098000 kubelet[2091]: I0610 16:47:40.681775    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:47:40 addons-098000 kubelet[2091]: E0610 16:47:40.682218    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:47:52 addons-098000 kubelet[2091]: I0610 16:47:52.681264    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:47:52 addons-098000 kubelet[2091]: E0610 16:47:52.682890    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:48:05 addons-098000 kubelet[2091]: I0610 16:48:05.681352    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:48:05 addons-098000 kubelet[2091]: E0610 16:48:05.682383    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:48:16 addons-098000 kubelet[2091]: I0610 16:48:16.681442    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:48:16 addons-098000 kubelet[2091]: E0610 16:48:16.681962    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:48:25 addons-098000 kubelet[2091]: E0610 16:48:25.703434    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:48:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:48:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:48:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:48:27 addons-098000 kubelet[2091]: I0610 16:48:27.681748    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:48:27 addons-098000 kubelet[2091]: E0610 16:48:27.684129    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:48:38 addons-098000 kubelet[2091]: I0610 16:48:38.681833    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:48:38 addons-098000 kubelet[2091]: E0610 16:48:38.682810    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	
	* 
	* ==> storage-provisioner [46105da82f67] <==
	* I0610 16:22:41.552997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:22:41.564566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:22:41.564604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:22:41.567070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:22:41.567242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b8b8b2f-e69f-4abd-8693-9c0a331852aa", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-098000_976d826c-217e-4d0d-87e7-e825dd783783 became leader
	I0610 16:22:41.567336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	I0610 16:22:41.668274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (480.90s)

TestAddons/parallel/MetricsServer (720.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:381: failed waiting for metrics-server deployment to stabilize: timed out waiting for the condition
addons_test.go:383: metrics-server stabilized in 6m0.002261917s
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
addons_test.go:385: ***** TestAddons/parallel/MetricsServer: pod "k8s-app=metrics-server" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:385: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
addons_test.go:385: TestAddons/parallel/MetricsServer: showing logs for failed pods as of 2023-06-10 09:40:45.073488 -0700 PDT m=+1176.065291626
addons_test.go:386: failed waiting for k8s-app=metrics-server pod: k8s-app=metrics-server within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-098000 -n addons-098000
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-098000 logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | --download-only -p             | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | binary-mirror-025000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000        | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | -p addons-098000               | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:28 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:28 PDT | 10 Jun 23 09:28 PDT |
	|         | addons-098000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:54.764352    1637 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:54.764757    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764761    1637 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:54.764764    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764861    1637 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:21:54.766294    1637 out.go:303] Setting JSON to false
	I0610 09:21:54.781540    1637 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1285,"bootTime":1686412829,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:54.781615    1637 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:54.786460    1637 out.go:177] * [addons-098000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:54.793542    1637 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:21:54.798440    1637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:54.793561    1637 notify.go:220] Checking for updates...
	I0610 09:21:54.804413    1637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:54.807450    1637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:54.810460    1637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:21:54.811765    1637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:21:54.814627    1637 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:54.818412    1637 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:21:54.823426    1637 start.go:297] selected driver: qemu2
	I0610 09:21:54.823432    1637 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:54.823441    1637 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:21:54.825256    1637 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:54.828578    1637 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:21:54.831535    1637 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:21:54.831554    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:21:54.831575    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:54.831579    1637 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:21:54.831586    1637 start_flags.go:319] config:
	{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:54.831700    1637 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:54.840445    1637 out.go:177] * Starting control plane node addons-098000 in cluster addons-098000
	I0610 09:21:54.844425    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:54.844451    1637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:21:54.844469    1637 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:54.844530    1637 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 09:21:54.844535    1637 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:21:54.844735    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:21:54.844750    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json: {Name:mkfbe060a3258f68fbe8b01ce26e4a7ada2f24f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:54.844947    1637 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:21:54.844969    1637 start.go:364] acquiring machines lock for addons-098000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:21:54.845063    1637 start.go:368] acquired machines lock for "addons-098000" in 89.292µs
	I0610 09:21:54.845075    1637 start.go:93] Provisioning new machine with config: &{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:21:54.845115    1637 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:21:54.853376    1637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 09:21:55.217388    1637 start.go:159] libmachine.API.Create for "addons-098000" (driver="qemu2")
	I0610 09:21:55.217427    1637 client.go:168] LocalClient.Create starting
	I0610 09:21:55.217549    1637 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:21:55.301145    1637 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:21:55.414002    1637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:21:55.826273    1637 main.go:141] libmachine: Creating SSH key...
	I0610 09:21:55.859428    1637 main.go:141] libmachine: Creating Disk image...
	I0610 09:21:55.859434    1637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:21:55.859612    1637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.941560    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:55.941581    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.941655    1637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2 +20000M
	I0610 09:21:55.948999    1637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:21:55.949013    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.949042    1637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.949049    1637 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:21:55.949080    1637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:e2:60:7a:4e:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:56.034280    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:56.034334    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:56.034338    1637 main.go:141] libmachine: Attempt 0
	I0610 09:21:56.034355    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:21:58.036587    1637 main.go:141] libmachine: Attempt 1
	I0610 09:21:58.036664    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:00.038868    1637 main.go:141] libmachine: Attempt 2
	I0610 09:22:00.038909    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:02.040980    1637 main.go:141] libmachine: Attempt 3
	I0610 09:22:02.040996    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:04.043076    1637 main.go:141] libmachine: Attempt 4
	I0610 09:22:04.043113    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:06.045175    1637 main.go:141] libmachine: Attempt 5
	I0610 09:22:06.045200    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047388    1637 main.go:141] libmachine: Attempt 6
	I0610 09:22:08.047472    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047875    1637 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0610 09:22:08.047987    1637 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6485f4af}
	I0610 09:22:08.048012    1637 main.go:141] libmachine: Found match: c2:e2:60:7a:4e:46
	I0610 09:22:08.048053    1637 main.go:141] libmachine: IP: 192.168.105.2
	I0610 09:22:08.048083    1637 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0610 09:22:10.069705    1637 machine.go:88] provisioning docker machine ...
	I0610 09:22:10.069788    1637 buildroot.go:166] provisioning hostname "addons-098000"
	I0610 09:22:10.070644    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.071570    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.071588    1637 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-098000 && echo "addons-098000" | sudo tee /etc/hostname
	I0610 09:22:10.164038    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-098000
	
	I0610 09:22:10.164160    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.164626    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.164641    1637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:22:10.239261    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:22:10.239281    1637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:22:10.239300    1637 buildroot.go:174] setting up certificates
	I0610 09:22:10.239307    1637 provision.go:83] configureAuth start
	I0610 09:22:10.239314    1637 provision.go:138] copyHostCerts
	I0610 09:22:10.239507    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:22:10.240632    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:22:10.241010    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:22:10.241260    1637 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.addons-098000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-098000]
	I0610 09:22:10.307069    1637 provision.go:172] copyRemoteCerts
	I0610 09:22:10.307140    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:22:10.307172    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.339991    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:22:10.346931    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 09:22:10.353742    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:22:10.360626    1637 provision.go:86] duration metric: configureAuth took 121.313416ms
	I0610 09:22:10.360639    1637 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:22:10.361002    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:10.361055    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.361272    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.361276    1637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:22:10.420194    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:22:10.420201    1637 buildroot.go:70] root file system type: tmpfs
	I0610 09:22:10.420251    1637 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:22:10.420295    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.420542    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.420577    1637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:22:10.485025    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:22:10.485070    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.485298    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.485310    1637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:22:10.830569    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:22:10.830580    1637 machine.go:91] provisioned docker machine in 760.843209ms
	I0610 09:22:10.830585    1637 client.go:171] LocalClient.Create took 15.613176541s
	I0610 09:22:10.830594    1637 start.go:167] duration metric: libmachine.API.Create for "addons-098000" took 15.613236583s
	I0610 09:22:10.830598    1637 start.go:300] post-start starting for "addons-098000" (driver="qemu2")
	I0610 09:22:10.830601    1637 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:22:10.830682    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:22:10.830692    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.862119    1637 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:22:10.863469    1637 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:22:10.863478    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:22:10.863540    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:22:10.863565    1637 start.go:303] post-start completed in 32.963459ms
	I0610 09:22:10.863901    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:22:10.864045    1637 start.go:128] duration metric: createHost completed in 16.018950083s
	I0610 09:22:10.864069    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.864287    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.864291    1637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:22:10.923434    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686414131.384712585
	
	I0610 09:22:10.923441    1637 fix.go:207] guest clock: 1686414131.384712585
	I0610 09:22:10.923446    1637 fix.go:220] Guest: 2023-06-10 09:22:11.384712585 -0700 PDT Remote: 2023-06-10 09:22:10.864048 -0700 PDT m=+16.118188126 (delta=520.664585ms)
	I0610 09:22:10.923456    1637 fix.go:191] guest clock delta is within tolerance: 520.664585ms
	I0610 09:22:10.923459    1637 start.go:83] releasing machines lock for "addons-098000", held for 16.0784145s
	I0610 09:22:10.923756    1637 ssh_runner.go:195] Run: cat /version.json
	I0610 09:22:10.923765    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.923833    1637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:22:10.923872    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:11.040251    1637 ssh_runner.go:195] Run: systemctl --version
	I0610 09:22:11.042905    1637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:22:11.045415    1637 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:22:11.045461    1637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:22:11.051643    1637 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:22:11.051653    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:11.051736    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:11.061365    1637 docker.go:633] Got preloaded images: 
	I0610 09:22:11.061374    1637 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:22:11.061418    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:11.064624    1637 ssh_runner.go:195] Run: which lz4
	I0610 09:22:11.066056    1637 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 09:22:11.067511    1637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:22:11.067524    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 09:22:12.384653    1637 docker.go:597] Took 1.318649 seconds to copy over tarball
	I0610 09:22:12.384711    1637 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:22:13.518722    1637 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.133975834s)
	I0610 09:22:13.518746    1637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:22:13.534141    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:13.537423    1637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:22:13.542380    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:13.617910    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:15.783768    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165840375s)
	I0610 09:22:15.783797    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.783942    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.789136    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:22:15.792061    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:22:15.794990    1637 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:22:15.795014    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:22:15.798511    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.801745    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:22:15.804884    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.807635    1637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:22:15.810661    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:22:15.814158    1637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:22:15.817306    1637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:22:15.819948    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:15.905204    1637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:22:15.910905    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.910988    1637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:22:15.916986    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.922219    1637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:22:15.929205    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.933866    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.938677    1637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:22:15.974269    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.979243    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.984512    1637 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:22:15.985792    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:22:15.988369    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:22:15.993006    1637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:22:16.073036    1637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:22:16.147707    1637 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:22:16.147726    1637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:22:16.152764    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:16.219604    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:17.389947    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170326875s)
	I0610 09:22:17.390012    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.468450    1637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:22:17.548751    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.629562    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.707590    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:22:17.714930    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.794794    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:22:17.819341    1637 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:22:17.819427    1637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:22:17.821557    1637 start.go:549] Will wait 60s for crictl version
	I0610 09:22:17.821591    1637 ssh_runner.go:195] Run: which crictl
	I0610 09:22:17.825207    1637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:22:17.842430    1637 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:22:17.842501    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.850299    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.866701    1637 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:22:17.866866    1637 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:22:17.868327    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.871885    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:17.871927    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.877489    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.877499    1637 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:22:17.877550    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.883143    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.883157    1637 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:22:17.883198    1637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:22:17.890410    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:17.890420    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:17.890445    1637 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:22:17.890455    1637 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-098000 NodeName:addons-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:22:17.890526    1637 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-098000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:22:17.890573    1637 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:22:17.890631    1637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:22:17.893850    1637 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:22:17.893880    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:22:17.896724    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0610 09:22:17.901642    1637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:22:17.906483    1637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0610 09:22:17.911373    1637 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0610 09:22:17.912694    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.916067    1637 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000 for IP: 192.168.105.2
	I0610 09:22:17.916076    1637 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:17.916236    1637 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:22:18.022564    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt ...
	I0610 09:22:18.022569    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt: {Name:mk821d9de36f93438ad430683cb25e2f1c33c9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022803    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key ...
	I0610 09:22:18.022806    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key: {Name:mk750eea32c0b02b6ad84d81711cbfd77ceefe90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022913    1637 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:22:18.159699    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt ...
	I0610 09:22:18.159708    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt: {Name:mk10e39bee2c5c6785228bc7733548a740243d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.159914    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key ...
	I0610 09:22:18.159917    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key: {Name:mk04d776031cd8d2755a757ba7736e35a9c25212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.160037    1637 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key
	I0610 09:22:18.160044    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt with IP's: []
	I0610 09:22:18.246526    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt ...
	I0610 09:22:18.246530    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: {Name:mk301aca75dad20ac385eb683aae1662edff3d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246697    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key ...
	I0610 09:22:18.246700    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key: {Name:mkdf4a2bc618a029a53fbd786e41dffe68b8316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246803    1637 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969
	I0610 09:22:18.246812    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:22:18.411436    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 ...
	I0610 09:22:18.411440    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969: {Name:mk922ab871b245e2b8e7e4b2a109a553fe1bcc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411596    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 ...
	I0610 09:22:18.411599    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969: {Name:mkdde2defc189629d0924fe6871b2adb52e47c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411697    1637 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt
	I0610 09:22:18.411933    1637 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key
	I0610 09:22:18.412033    1637 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key
	I0610 09:22:18.412047    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt with IP's: []
	I0610 09:22:18.578568    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt ...
	I0610 09:22:18.578583    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt: {Name:mkb4544f3ff14d84a98fd9ec92bfcdbb5d50e84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.578783    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key ...
	I0610 09:22:18.578786    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key: {Name:mk82ce3998197ea814bf8f591a5b4b56c617f405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.579030    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:22:18.579468    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:22:18.579491    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:22:18.579672    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:22:18.580285    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:22:18.587660    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:22:18.594728    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:22:18.602219    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 09:22:18.609690    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:22:18.617442    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:22:18.624297    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:22:18.631049    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:22:18.638070    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:22:18.644969    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:22:18.650094    1637 ssh_runner.go:195] Run: openssl version
	I0610 09:22:18.652167    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:22:18.655090    1637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656540    1637 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656561    1637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.658363    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 09:22:18.661572    1637 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:22:18.662872    1637 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:22:18.662908    1637 kubeadm.go:404] StartCluster: {Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:22:18.662975    1637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:22:18.668496    1637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:22:18.671389    1637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:22:18.674606    1637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:22:18.677626    1637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:22:18.677644    1637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:22:18.703158    1637 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:22:18.703188    1637 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:22:18.757797    1637 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:22:18.757860    1637 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:22:18.757910    1637 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 09:22:18.816123    1637 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:22:18.821365    1637 out.go:204]   - Generating certificates and keys ...
	I0610 09:22:18.821409    1637 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:22:18.821441    1637 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:22:19.085233    1637 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:22:19.181413    1637 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:22:19.330348    1637 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:22:19.412707    1637 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:22:19.604000    1637 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:22:19.604069    1637 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.814398    1637 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:22:19.814478    1637 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.907005    1637 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:22:20.056367    1637 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:22:20.125295    1637 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:22:20.125333    1637 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:22:20.241297    1637 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:22:20.330399    1637 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:22:20.489216    1637 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:22:20.764229    1637 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:22:20.771051    1637 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:22:20.771103    1637 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:22:20.771135    1637 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:22:20.859965    1637 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:22:20.864105    1637 out.go:204]   - Booting up control plane ...
	I0610 09:22:20.864178    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:22:20.864224    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:22:20.864257    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:22:20.864302    1637 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:22:20.865267    1637 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:22:24.366796    1637 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501337 seconds
	I0610 09:22:24.366861    1637 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:22:24.372204    1637 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:22:24.898455    1637 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:22:24.898779    1637 kubeadm.go:322] [mark-control-plane] Marking the node addons-098000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:22:25.404043    1637 kubeadm.go:322] [bootstrap-token] Using token: 8xmw5d.kvohdu7dlcpn05ob
	I0610 09:22:25.410608    1637 out.go:204]   - Configuring RBAC rules ...
	I0610 09:22:25.410669    1637 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:22:25.411737    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:22:25.418545    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:22:25.419904    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:22:25.421252    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:22:25.422283    1637 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:22:25.427205    1637 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:22:25.603958    1637 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:22:25.815834    1637 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:22:25.816185    1637 kubeadm.go:322] 
	I0610 09:22:25.816225    1637 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:22:25.816233    1637 kubeadm.go:322] 
	I0610 09:22:25.816291    1637 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:22:25.816295    1637 kubeadm.go:322] 
	I0610 09:22:25.816308    1637 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:22:25.816346    1637 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:22:25.816388    1637 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:22:25.816392    1637 kubeadm.go:322] 
	I0610 09:22:25.816425    1637 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:22:25.816430    1637 kubeadm.go:322] 
	I0610 09:22:25.816463    1637 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:22:25.816466    1637 kubeadm.go:322] 
	I0610 09:22:25.816508    1637 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:22:25.816560    1637 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:22:25.816602    1637 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:22:25.816605    1637 kubeadm.go:322] 
	I0610 09:22:25.816653    1637 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:22:25.816694    1637 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:22:25.816699    1637 kubeadm.go:322] 
	I0610 09:22:25.816749    1637 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.816801    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:22:25.816815    1637 kubeadm.go:322] 	--control-plane 
	I0610 09:22:25.816823    1637 kubeadm.go:322] 
	I0610 09:22:25.816880    1637 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:22:25.816883    1637 kubeadm.go:322] 
	I0610 09:22:25.816931    1637 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.817003    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:22:25.817072    1637 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:22:25.817175    1637 kubeadm.go:322] W0610 16:22:19.219117    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817283    1637 kubeadm.go:322] W0610 16:22:21.323610    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817294    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:25.817303    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:25.823848    1637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:22:25.826928    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:22:25.830443    1637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 09:22:25.836316    1637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:22:25.836378    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.836393    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=addons-098000 minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.900338    1637 ops.go:34] apiserver oom_adj: -16
	I0610 09:22:25.900382    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.433306    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.933284    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.433115    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.933305    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.433535    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.933493    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.433524    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.932908    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.433563    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.933551    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.433517    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.933506    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.433459    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.933537    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.433223    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.933503    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.432603    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.933481    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.433267    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.933228    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.433253    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.933272    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.433226    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.933202    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.431772    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.933197    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.432078    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.482163    1637 kubeadm.go:1076] duration metric: took 13.645838667s to wait for elevateKubeSystemPrivileges.
	I0610 09:22:39.482178    1637 kubeadm.go:406] StartCluster complete in 20.819301625s
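	Note: the block of repeated `kubectl get sa default` runs above is a poll loop: minikube retries the command roughly every 500ms until the default service account exists, which is the elevateKubeSystemPrivileges wait reported in the duration line. A minimal sketch of that retry pattern (illustrative only; the function name and structure are not minikube's actual code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor retries check() every interval until it succeeds or the timeout expires.
    // This mirrors the shape of the loop behind the repeated "get sa default" log lines.
    func waitFor(check func() error, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting: last error: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        attempts := 0
        err := waitFor(func() error {
            attempts++
            if attempts < 4 {
                return errors.New(`serviceaccount "default" not found`)
            }
            return nil
        }, 500*time.Millisecond, 30*time.Second)
        fmt.Printf("finished after %d attempts, err=%v\n", attempts, err)
    }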
	I0610 09:22:39.482188    1637 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482341    1637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:22:39.482516    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482746    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:22:39.482786    1637 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 09:22:39.482870    1637 addons.go:66] Setting volumesnapshots=true in profile "addons-098000"
	I0610 09:22:39.482872    1637 addons.go:66] Setting inspektor-gadget=true in profile "addons-098000"
	I0610 09:22:39.482879    1637 addons.go:228] Setting addon volumesnapshots=true in "addons-098000"
	I0610 09:22:39.482922    1637 addons.go:66] Setting registry=true in profile "addons-098000"
	I0610 09:22:39.482902    1637 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-098000"
	I0610 09:22:39.482936    1637 addons.go:228] Setting addon registry=true in "addons-098000"
	I0610 09:22:39.482958    1637 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:39.482880    1637 addons.go:228] Setting addon inspektor-gadget=true in "addons-098000"
	I0610 09:22:39.482979    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482984    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482878    1637 addons.go:66] Setting gcp-auth=true in profile "addons-098000"
	I0610 09:22:39.483016    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483020    1637 mustload.go:65] Loading cluster: addons-098000
	I0610 09:22:39.483034    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483276    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.483275    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.482885    1637 addons.go:66] Setting ingress=true in profile "addons-098000"
	I0610 09:22:39.483383    1637 addons.go:228] Setting addon ingress=true in "addons-098000"
	I0610 09:22:39.483423    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.483508    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483523    1637 addons.go:274] "addons-098000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0610 09:22:39.483525    1637 addons.go:464] Verifying addon registry=true in "addons-098000"
	W0610 09:22:39.483511    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483543    1637 addons.go:274] "addons-098000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0610 09:22:39.487787    1637 out.go:177] * Verifying registry addon...
	I0610 09:22:39.482886    1637 addons.go:66] Setting default-storageclass=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting cloud-spanner=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting ingress-dns=true in profile "addons-098000"
	I0610 09:22:39.482892    1637 addons.go:66] Setting storage-provisioner=true in profile "addons-098000"
	I0610 09:22:39.482899    1637 addons.go:66] Setting metrics-server=true in profile "addons-098000"
	W0610 09:22:39.483773    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483867    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.484558    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.494895    1637 addons.go:228] Setting addon ingress-dns=true in "addons-098000"
	I0610 09:22:39.494904    1637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-098000"
	I0610 09:22:39.494907    1637 addons.go:228] Setting addon metrics-server=true in "addons-098000"
	I0610 09:22:39.494911    1637 addons.go:228] Setting addon cloud-spanner=true in "addons-098000"
	I0610 09:22:39.494913    1637 addons.go:228] Setting addon storage-provisioner=true in "addons-098000"
	W0610 09:22:39.494917    1637 addons.go:274] "addons-098000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0610 09:22:39.494920    1637 addons.go:274] "addons-098000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0610 09:22:39.495382    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 09:22:39.500831    1637 addons.go:464] Verifying addon ingress=true in "addons-098000"
	I0610 09:22:39.500842    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500849    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.504830    1637 out.go:177] * Verifying ingress addon...
	I0610 09:22:39.500952    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500997    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 09:22:39.501041    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.501118    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.514859    1637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0610 09:22:39.511954    1637 addons.go:274] "addons-098000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0610 09:22:39.512421    1637 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 09:22:39.517592    1637 addons.go:228] Setting addon default-storageclass=true in "addons-098000"
	I0610 09:22:39.517921    1637 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.518096    1637 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 09:22:39.521871    1637 addons.go:464] Verifying addon metrics-server=true in "addons-098000"
	I0610 09:22:39.527803    1637 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 09:22:39.528897    1637 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 09:22:39.533879    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:22:39.533885    1637 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 09:22:39.533900    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.539950    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 09:22:39.549899    1637 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.549908    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 09:22:39.549915    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540014    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540659    1637 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.550015    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:22:39.550019    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.552885    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 09:22:39.545910    1637 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.547022    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:22:39.555818    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 09:22:39.555836    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.558872    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 09:22:39.563787    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 09:22:39.565032    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 09:22:39.576758    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 09:22:39.585719    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 09:22:39.588857    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 09:22:39.588866    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 09:22:39.588875    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.610676    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.641637    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.644621    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.683769    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.740787    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 09:22:39.740799    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 09:22:39.840307    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 09:22:39.840321    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 09:22:39.985655    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 09:22:39.985667    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 09:22:40.064364    1637 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-098000" context rescaled to 1 replicas
	I0610 09:22:40.064382    1637 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:22:40.068539    1637 out.go:177] * Verifying Kubernetes components...
	I0610 09:22:40.077600    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:40.261757    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 09:22:40.261768    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 09:22:40.290415    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 09:22:40.290425    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 09:22:40.300542    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 09:22:40.300551    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 09:22:40.308642    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 09:22:40.308652    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 09:22:40.313342    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 09:22:40.313353    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 09:22:40.318717    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 09:22:40.318725    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 09:22:40.323460    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.323466    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 09:22:40.335717    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.661069    1637 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105262875s)
	I0610 09:22:40.661101    1637 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
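	Note: the Corefile edit completed above adds a `log` directive before `errors` and a `hosts` block before the `forward . /etc/resolv.conf` entry, so that host.minikube.internal resolves to the host machine's address. Based on the text inserted by that sed command, the affected section of the CoreDNS Corefile ends up roughly like this (surrounding directives elided):

        log
        errors
        ...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...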
	I0610 09:22:40.737190    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.12650025s)
	I0610 09:22:40.873352    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23170025s)
	I0610 09:22:40.873360    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.228730125s)
	I0610 09:22:40.873397    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.189617792s)
	I0610 09:22:40.873843    1637 node_ready.go:35] waiting up to 6m0s for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875337    1637 node_ready.go:49] node "addons-098000" has status "Ready":"True"
	I0610 09:22:40.875343    1637 node_ready.go:38] duration metric: took 1.493375ms waiting for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875346    1637 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:40.878632    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881351    1637 pod_ready.go:92] pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:40.881360    1637 pod_ready.go:81] duration metric: took 2.720875ms waiting for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881363    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:41.422744    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.08700475s)
	I0610 09:22:41.422764    1637 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:41.429025    1637 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 09:22:41.436428    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 09:22:41.441210    1637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 09:22:41.441218    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:41.945707    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.446004    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.891987    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:42.949163    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.445226    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.945705    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.445736    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.893909    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:44.949633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.445855    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.945805    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.106349    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 09:22:46.106363    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.140536    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 09:22:46.145624    1637 addons.go:228] Setting addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.145643    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:46.146378    1637 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 09:22:46.146386    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.179928    1637 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 09:22:46.183883    1637 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 09:22:46.187898    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 09:22:46.187903    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 09:22:46.192588    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 09:22:46.192594    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 09:22:46.199251    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.199256    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 09:22:46.204462    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.429785    1637 addons.go:464] Verifying addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.434320    1637 out.go:177] * Verifying gcp-auth addon...
	I0610 09:22:46.440768    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 09:22:46.443515    1637 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 09:22:46.443521    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:46.446140    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949654    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.389319    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:47.445303    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.446055    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:47.946177    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.946875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.446743    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.447103    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.945711    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.946918    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.389715    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:49.445862    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.448994    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.945095    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.945638    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.446626    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.446936    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.887650    1637 pod_ready.go:97] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887663    1637 pod_ready.go:81] duration metric: took 10.00631125s waiting for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	E0610 09:22:50.887668    1637 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887672    1637 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890299    1637 pod_ready.go:92] pod "etcd-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.890307    1637 pod_ready.go:81] duration metric: took 2.63175ms waiting for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890310    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892694    1637 pod_ready.go:92] pod "kube-apiserver-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.892699    1637 pod_ready.go:81] duration metric: took 2.386083ms waiting for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892703    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895043    1637 pod_ready.go:92] pod "kube-controller-manager-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.895049    1637 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895053    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897341    1637 pod_ready.go:92] pod "kube-proxy-jpnqh" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.897346    1637 pod_ready.go:81] duration metric: took 2.29075ms waiting for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897350    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.945358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.946279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.288420    1637 pod_ready.go:92] pod "kube-scheduler-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:51.288430    1637 pod_ready.go:81] duration metric: took 391.078333ms waiting for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:51.288436    1637 pod_ready.go:38] duration metric: took 10.413098792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:51.288445    1637 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:22:51.288516    1637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:22:51.295818    1637 api_server.go:72] duration metric: took 11.231423584s to wait for apiserver process to appear ...
	I0610 09:22:51.295824    1637 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:22:51.295831    1637 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0610 09:22:51.299125    1637 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0610 09:22:51.299826    1637 api_server.go:141] control plane version: v1.27.2
	I0610 09:22:51.299832    1637 api_server.go:131] duration metric: took 4.005625ms to wait for apiserver health ...
	I0610 09:22:51.299835    1637 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:22:51.445314    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:51.446212    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.490284    1637 system_pods.go:59] 11 kube-system pods found
	I0610 09:22:51.490295    1637 system_pods.go:61] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.490299    1637 system_pods.go:61] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.490303    1637 system_pods.go:61] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.490306    1637 system_pods.go:61] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.490311    1637 system_pods.go:61] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.490314    1637 system_pods.go:61] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.490317    1637 system_pods.go:61] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.490320    1637 system_pods.go:61] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.490323    1637 system_pods.go:61] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.490325    1637 system_pods.go:61] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.490336    1637 system_pods.go:61] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.490341    1637 system_pods.go:74] duration metric: took 190.503333ms to wait for pod list to return data ...
	I0610 09:22:51.490345    1637 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:22:51.687921    1637 default_sa.go:45] found service account: "default"
	I0610 09:22:51.687931    1637 default_sa.go:55] duration metric: took 197.581625ms for default service account to be created ...
	I0610 09:22:51.687935    1637 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:22:51.890310    1637 system_pods.go:86] 11 kube-system pods found
	I0610 09:22:51.890320    1637 system_pods.go:89] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.890326    1637 system_pods.go:89] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.890330    1637 system_pods.go:89] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.890333    1637 system_pods.go:89] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.890336    1637 system_pods.go:89] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.890338    1637 system_pods.go:89] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.890341    1637 system_pods.go:89] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.890344    1637 system_pods.go:89] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.890349    1637 system_pods.go:89] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.890351    1637 system_pods.go:89] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.890354    1637 system_pods.go:89] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.890357    1637 system_pods.go:126] duration metric: took 202.419584ms to wait for k8s-apps to be running ...
	I0610 09:22:51.890363    1637 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:22:51.890418    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:51.897401    1637 system_svc.go:56] duration metric: took 7.035125ms WaitForService to wait for kubelet.
	I0610 09:22:51.897410    1637 kubeadm.go:581] duration metric: took 11.8330175s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:22:51.897420    1637 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:22:51.944537    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.945311    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.087254    1637 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:22:52.087281    1637 node_conditions.go:123] node cpu capacity is 2
	I0610 09:22:52.087290    1637 node_conditions.go:105] duration metric: took 189.867833ms to run NodePressure ...
	I0610 09:22:52.087295    1637 start.go:228] waiting for startup goroutines ...
	I0610 09:22:52.445279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:52.445610    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.945799    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.946052    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.445389    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.446014    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.945473    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.946237    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.446325    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.448382    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.948114    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.951263    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.447181    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.447511    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.945501    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.946418    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.445349    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.445910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.945410    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.946065    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.447469    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.448009    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.945353    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.946520    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454959    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.946148    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.947450    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.446206    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:00.447700    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.944434    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.945129    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.445646    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.446643    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:01.945710    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.947152    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.450730    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.454285    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.952960    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.955376    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.446358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.447878    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.945294    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.946290    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.445145    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.446164    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.946364    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.946514    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.449729    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.453690    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.947873    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.950281    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.445562    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.445795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.946136    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.947509    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.445951    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.446633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.945814    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.946157    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.446086    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.446099    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.970991    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.971383    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.448620    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:09.449087    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.946728    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.948250    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446827    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446978    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:10.945421    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.945732    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.444797    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.445621    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.948926    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.949262    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.452305    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:12.453786    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.948653    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.949795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.445378    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.446558    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:13.946404    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.946644    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.446073    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.446331    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.946569    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.946725    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.445689    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.446865    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.947373    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.948973    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.445756    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:16.446819    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.944171    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.945088    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.448798    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.450089    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:17.952301    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.955532    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.945244    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.946363    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.445300    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:19.445962    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944002    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944781    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.446084    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:20.446223    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.952440    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.954313    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.445625    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.446916    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.945782    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.947236    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.445836    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.446162    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.945365    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.946169    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.449820    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:23.452877    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.953442    1637 kapi.go:107] duration metric: took 37.512712584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 09:23:23.958122    1637 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-098000 cluster.
	I0610 09:23:23.957179    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.961932    1637 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 09:23:23.965925    1637 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 09:23:24.450360    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:24.945980    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.445712    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.946008    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.446034    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.950257    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.454943    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.956882    1637 kapi.go:107] duration metric: took 46.520505042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 09:28:39.510321    1637 kapi.go:107] duration metric: took 6m0.007516916s to wait for kubernetes.io/minikube-addons=registry ...
	W0610 09:28:39.510625    1637 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0610 09:28:39.531369    1637 kapi.go:107] duration metric: took 6m0.011549375s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0610 09:28:39.531491    1637 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0610 09:28:39.539250    1637 out.go:177] * Enabled addons: volumesnapshots, inspektor-gadget, metrics-server, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, gcp-auth, csi-hostpath-driver
	I0610 09:28:39.545184    1637 addons.go:499] enable addons completed in 6m0.055013834s: enabled=[volumesnapshots inspektor-gadget metrics-server cloud-spanner storage-provisioner default-storageclass ingress-dns gcp-auth csi-hostpath-driver]
	I0610 09:28:39.545227    1637 start.go:233] waiting for cluster config update ...
	I0610 09:28:39.545256    1637 start.go:242] writing updated cluster config ...
	I0610 09:28:39.546371    1637 ssh_runner.go:195] Run: rm -f paused
	I0610 09:28:39.689843    1637 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:28:39.694186    1637 out.go:177] 
	W0610 09:28:39.697254    1637 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:28:39.701213    1637 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:28:39.709228    1637 out.go:177] * Done! kubectl is now configured to use "addons-098000" cluster and "default" namespace by default
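	A minimal sketch of the pod configuration hinted at by the gcp-auth output above: the webhook message names only the `gcp-auth-skip-secret` label key, so the pod name, container, image, and label value below are illustrative assumptions rather than anything taken from this run.

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-pod                  # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"     # key from the gcp-auth note above; the value is an assumption
	    spec:
	      containers:
	      - name: app                        # hypothetical container
	        image: busybox
	        command: ["sleep", "3600"]

	With such a label on the pod, the gcp-auth webhook described in the log would be expected to skip mounting the GCP credentials into it.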
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:40:45 UTC. --
	Jun 10 16:28:38 addons-098000 dockerd[939]: time="2023-06-10T16:28:38.780840261Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940455787Z" level=info msg="shim disconnected" id=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940486162Z" level=warning msg="cleaning up after shim disconnected" id=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.940492579Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[933]: time="2023-06-10T16:28:44.940636785Z" level=info msg="ignoring event" container=6653f298124092fb4cd1d9f2b0dada096339ecd7d6c528a34800580ffc4dcb13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:28:44 addons-098000 dockerd[933]: time="2023-06-10T16:28:44.998241480Z" level=info msg="ignoring event" container=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998855056Z" level=info msg="shim disconnected" id=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998883722Z" level=warning msg="cleaning up after shim disconnected" id=8a862786595bf71720a966e2f18993267b6dea2d132b139c62fe8ba5e7a2b3af namespace=moby
	Jun 10 16:28:44 addons-098000 dockerd[939]: time="2023-06-10T16:28:44.998888056Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737483784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737542367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737565825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.737574241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:33:46 addons-098000 dockerd[933]: time="2023-06-10T16:33:46.778804028Z" level=info msg="ignoring event" container=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779017026Z" level=info msg="shim disconnected" id=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779051484Z" level=warning msg="cleaning up after shim disconnected" id=865c6a69de56800cd4232a829350cd25120f42585d22af84c86c1c4d84e8c6b4 namespace=moby
	Jun 10 16:33:46 addons-098000 dockerd[939]: time="2023-06-10T16:33:46.779056025Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.747938173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.747997298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.748248087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.748452960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:38:54 addons-098000 dockerd[933]: time="2023-06-10T16:38:54.805196171Z" level=info msg="ignoring event" container=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805369210Z" level=info msg="shim disconnected" id=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805425252Z" level=warning msg="cleaning up after shim disconnected" id=6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701 namespace=moby
	Jun 10 16:38:54 addons-098000 dockerd[939]: time="2023-06-10T16:38:54.805429585Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID
	6877b2a4c1b8b       1499ed4fbd0aa                                                                                                                                About a minute ago   Exited              minikube-ingress-dns                     8                   8e5b404496c4e
	23a8cae6443cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 minutes ago       Running             csi-snapshotter                          0                   567c041b8040d
	1a73024f59864       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 17 minutes ago       Running             gcp-auth                                 0                   d8f3043938a40
	3fa8701fda26c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          17 minutes ago       Running             csi-provisioner                          0                   567c041b8040d
	aafd1d61dfe4b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            17 minutes ago       Running             liveness-probe                           0                   567c041b8040d
	2b6767dfbe9d3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           17 minutes ago       Running             hostpath                                 0                   567c041b8040d
	8f02984364568       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                17 minutes ago       Running             node-driver-registrar                    0                   567c041b8040d
	868cfa9fcba69       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   17 minutes ago       Running             csi-external-health-monitor-controller   0                   567c041b8040d
	26cfafca2bb0d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              17 minutes ago       Running             csi-resizer                              0                   a78a427783820
	c58c2d26acda8       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             17 minutes ago       Running             csi-attacher                             0                   674b1cd12ae30
	46105da82f67a       ba04bb24b9575                                                                                                                                18 minutes ago       Running             storage-provisioner                      0                   67c7765a9fa6e
	de0a71571f8d0       29921a0845422                                                                                                                                18 minutes ago       Running             kube-proxy                               0                   2bc9129027615
	adfb52103967f       97e04611ad434                                                                                                                                18 minutes ago       Running             coredns                                  0                   d428f978de558
	335475d795fcf       305d7ed1dae28                                                                                                                                18 minutes ago       Running             kube-scheduler                           0                   31fdcf4abeef0
	3dcf946c301ce       2ee705380c3c5                                                                                                                                18 minutes ago       Running             kube-controller-manager                  0                   9fed8ca4bd2f8
	74423d2dab41d       72c9df6be7f1b                                                                                                                                18 minutes ago       Running             kube-apiserver                           0                   11d78b6999216
	2a81bf4413e12       24bc64e911039                                                                                                                                18 minutes ago       Running             etcd                                     0                   a20e51a803c8c
	
	* 
	* ==> coredns [adfb52103967] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46766 - 38334 "HINFO IN 1120296007274907072.5268654669647465865. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004199511s
	[INFO] 10.244.0.10:39208 - 36576 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125s
	[INFO] 10.244.0.10:59425 - 64759 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155334s
	[INFO] 10.244.0.10:33915 - 19077 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000037167s
	[INFO] 10.244.0.10:46994 - 65166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002725s
	[INFO] 10.244.0.10:46598 - 37414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043625s
	[INFO] 10.244.0.10:55204 - 18019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000032792s
	[INFO] 10.244.0.10:60613 - 7185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000939127s
	[INFO] 10.244.0.10:40293 - 55849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00103996s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=addons-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-098000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-098000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-098000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:40:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:39:17 +0000   Sat, 10 Jun 2023 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 43359b33bc0f4b9c9610dd4ec5308f62
	  System UUID:                43359b33bc0f4b9c9610dd4ec5308f62
	  Boot ID:                    eb81fa5c-fe8f-47ab-b5e5-9f5fe2e987b0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-jkcxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5d78c9869d-f2tnn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-pjvh6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-addons-098000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-jpnqh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-098000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-098000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-098000 event: Registered Node addons-098000 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun10 16:22] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.696014] EINJ: EINJ table not found.
	[  +0.658239] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043798] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000807] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.876165] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.071972] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.924516] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +2.288987] systemd-fstab-generator[866]: Ignoring "noauto" for root device
	[  +0.165983] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +0.077870] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.072149] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +1.146266] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.099605] systemd-fstab-generator[1083]: Ignoring "noauto" for root device
	[  +0.082038] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.080513] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.078963] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.086582] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	[  +3.056689] systemd-fstab-generator[1402]: Ignoring "noauto" for root device
	[  +4.651414] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[ +14.757696] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.157496] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.873848] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jun10 16:23] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [2a81bf4413e1] <==
	* {"level":"info","ts":"2023-06-10T16:22:22.463Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.857Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-098000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:32:22.450Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":974}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":974,"took":"2.490131ms","hash":4035340276}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035340276,"revision":974,"compact-revision":-1}
	{"level":"info","ts":"2023-06-10T16:37:22.461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1290}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1290,"took":"1.421443ms","hash":2326989487}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2326989487,"revision":1290,"compact-revision":974}
	
	* 
	* ==> gcp-auth [1a73024f5986] <==
	* 2023/06/10 16:23:23 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  16:40:45 up 18 min,  0 users,  load average: 0.55, 0.53, 0.41
	Linux addons-098000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [74423d2dab41] <==
	* I0610 16:22:23.642323       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:22:23.642356       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:22:23.657792       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:22:24.401560       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:22:24.563279       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:22:24.568497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:22:24.568654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:22:24.720978       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:22:24.731371       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:22:24.801810       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:22:24.805350       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0610 16:22:24.806303       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:22:24.807740       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:22:25.583035       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:22:26.059225       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:22:26.063878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:22:26.068513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:22:39.217505       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 16:22:39.917252       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:22:40.754199       1 alloc.go:330] "allocated clusterIPs" service="default/cloud-spanner-emulator" clusterIPs=map[IPv4:10.99.222.169]
	I0610 16:22:41.357691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.106.85.14]
	I0610 16:22:41.362266       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0610 16:22:41.419673       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.111.90.60]
	I0610 16:22:46.394399       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.102.46.8]
	I0610 16:22:46.411449       1 controller.go:624] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [3dcf946c301c] <==
	* I0610 16:22:46.441438       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:22:46.444358       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:22:46.468051       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:09.211557       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:09.224222       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:10.225842       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:10.320708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.244592       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:11.258467       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.330708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.333357       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.335850       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:11.335887       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.336870       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.345682       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.251101       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.256393       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.263577       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:12.263691       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.265671       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.266556       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:41.027747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:41.050836       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:42.013412       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:42.047992       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [de0a71571f8d] <==
	* I0610 16:22:40.477801       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0610 16:22:40.477968       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0610 16:22:40.477988       1 server_others.go:551] "Using iptables proxy"
	I0610 16:22:40.508315       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:22:40.508325       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:22:40.508357       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:22:40.508608       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:22:40.508614       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:22:40.509861       1 config.go:188] "Starting service config controller"
	I0610 16:22:40.509869       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:22:40.509881       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:22:40.509882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:22:40.511342       1 config.go:315] "Starting node config controller"
	I0610 16:22:40.511347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:22:40.609918       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:22:40.609943       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:22:40.611397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [335475d795fc] <==
	* W0610 16:22:23.606482       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:22:23.606891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:22:23.606959       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:22:23.606982       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:22:23.607008       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607026       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607067       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:23.607087       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:23.607166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607247       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:23.607268       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.463642       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:22:24.463731       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:22:24.485768       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:22:24.485809       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:22:24.588161       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:24.588197       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:24.600064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:24.600158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.604631       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:22:24.604651       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:22:24.616055       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:22:24.616131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:22:27.098734       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:40:45 UTC. --
	Jun 10 16:38:55 addons-098000 kubelet[2091]: E0610 16:38:55.162992    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:07 addons-098000 kubelet[2091]: I0610 16:39:07.681259    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:07 addons-098000 kubelet[2091]: E0610 16:39:07.682935    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:19 addons-098000 kubelet[2091]: I0610 16:39:19.682302    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:19 addons-098000 kubelet[2091]: E0610 16:39:19.683995    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:25 addons-098000 kubelet[2091]: E0610 16:39:25.689415    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:39:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:39:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:39:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:39:32 addons-098000 kubelet[2091]: I0610 16:39:32.680792    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:32 addons-098000 kubelet[2091]: E0610 16:39:32.681284    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:39:47 addons-098000 kubelet[2091]: I0610 16:39:47.681883    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:39:47 addons-098000 kubelet[2091]: E0610 16:39:47.684253    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:02 addons-098000 kubelet[2091]: I0610 16:40:02.681991    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:02 addons-098000 kubelet[2091]: E0610 16:40:02.683097    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:13 addons-098000 kubelet[2091]: I0610 16:40:13.680609    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:13 addons-098000 kubelet[2091]: E0610 16:40:13.680899    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:25 addons-098000 kubelet[2091]: E0610 16:40:25.787611    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:40:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:40:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:40:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:40:27 addons-098000 kubelet[2091]: I0610 16:40:27.681515    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:27 addons-098000 kubelet[2091]: E0610 16:40:27.682723    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:40:41 addons-098000 kubelet[2091]: I0610 16:40:41.681329    2091 scope.go:115] "RemoveContainer" containerID="6877b2a4c1b8beedaecb2d8ff7f51b97cf361d0560c53fa4467b870d2d5bf701"
	Jun 10 16:40:41 addons-098000 kubelet[2091]: E0610 16:40:41.682958    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	
	* 
	* ==> storage-provisioner [46105da82f67] <==
	* I0610 16:22:41.552997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:22:41.564566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:22:41.564604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:22:41.567070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:22:41.567242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b8b8b2f-e69f-4abd-8693-9c0a331852aa", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-098000_976d826c-217e-4d0d-87e7-e825dd783783 became leader
	I0610 16:22:41.567336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	I0610 16:22:41.668274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
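Note on the kubelet log above: the recurring "Could not set up iptables canary" errors indicate that the guest kernel lacks the ip6tables nat table (ip6table_nat), exactly as the message suggests, and are likely unrelated to the metrics-server failure itself. A quick manual check from the host — these commands are illustrative and were not part of this run — would be:

    out/minikube-darwin-arm64 ssh -p addons-098000 "lsmod | grep ip6table"
    out/minikube-darwin-arm64 ssh -p addons-098000 "sudo modprobe ip6table_nat"

If the module is genuinely absent from the Buildroot kernel, the warning is expected and can be ignored for this failure.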
--- FAIL: TestAddons/parallel/MetricsServer (720.87s)

TestAddons/parallel/CSI (387.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 3.007459ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-098000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-098000 get pvc hpvc -o jsonpath={.status.phase} -n default
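The long run of "get pvc hpvc" polls above is the test's readiness helper waiting for the claim to bind. When a claim sits in Pending like this, inspecting it directly shows whether the CSI hostpath storage class and provisioner are in place; the following commands are illustrative and not part of the recorded run:

    kubectl --context addons-098000 describe pvc hpvc -n default
    kubectl --context addons-098000 get storageclass

Here the claim did eventually bind, since the task-pv-pod created below reaches Running.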
addons_test.go:550: (dbg) Run:  kubectl --context addons-098000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6cac190f-66c9-4f6e-9f1f-d41f4d21471e] Pending
helpers_test.go:344: "task-pv-pod" [6cac190f-66c9-4f6e-9f1f-d41f4d21471e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6cac190f-66c9-4f6e-9f1f-d41f4d21471e] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.011736375s
addons_test.go:560: (dbg) Run:  kubectl --context addons-098000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:560: (dbg) Non-zero exit: kubectl --context addons-098000 create -f testdata/csi-hostpath-driver/snapshot.yaml: exit status 1 (106.587ms)

                                                
                                                
** stderr ** 
	error: resource mapping not found for name: "new-snapshot-demo" namespace: "" from "testdata/csi-hostpath-driver/snapshot.yaml": no matches for kind "VolumeSnapshot" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first

                                                
                                                
** /stderr **
addons_test.go:562: creating pod with kubectl --context addons-098000 create -f testdata/csi-hostpath-driver/snapshot.yaml failed: exit status 1
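The "no matches for kind VolumeSnapshot" error above means the snapshot.storage.k8s.io/v1 CRDs never made it into the cluster, so every subsequent "get volumesnapshot" poll below fails with "the server doesn't have a resource type". One way to confirm and recover — these commands are illustrative, not part of the recorded run — is to check for the external-snapshotter CRDs and, if they are missing, re-enable the volumesnapshots addon, which installs them in minikube:

    kubectl --context addons-098000 get crd volumesnapshots.snapshot.storage.k8s.io
    kubectl --context addons-098000 api-resources --api-group=snapshot.storage.k8s.io
    out/minikube-darwin-arm64 -p addons-098000 addons enable volumesnapshots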
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (47.976542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.843375ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.493875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.257625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.571625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (72.767791ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.826625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (49.291417ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (63.559708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (71.678875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.854458ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (73.958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (78.168875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (70.811792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.808291ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.964292ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.63825ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.782792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (55.968334ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.208042ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (77.004208ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.799416ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.787459ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.486125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.059792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.728291ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (64.5405ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.364334ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.112917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.711916ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.106917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.350791ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.244542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.365125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.615917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.668958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (61.336ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.01675ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (82.09675ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.324125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.118208ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.415958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (75.905834ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.074875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.293125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.6995ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.747791ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.928ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.303833ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.199792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.150125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.144917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.137ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.424917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.342333ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.7165ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.864167ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (77.453667ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.049083ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.971834ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.2145ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.729542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.527542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.491792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.679625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.830333ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.839875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.651125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.872167ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.912417ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.518667ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.332625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.42875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.375708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (80.738167ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.654208ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.080708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.946208ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.168917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.426917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.306583ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.296125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.8245ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.135958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.574875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.418ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.935625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.094875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.543667ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.348708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.875958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.249834ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.763666ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.100833ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.402209ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.816917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.435792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.803459ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.821333ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.900542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (99.090958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.643833ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.825417ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.092ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.5765ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.134583ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.454333ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (70.578833ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.028375ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.680458ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.153792ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.892125ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (75.928125ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (66.482875ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.6905ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.309167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.895458ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.322542ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (62.246125ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (66.473375ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.036167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.534209ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.457625ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.988417ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.507333ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.187042ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.855375ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.391625ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.017583ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (46.898042ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.422625ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.911167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.136208ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.46825ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.910167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.770583ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.74975ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.891584ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.522875ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.546333ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (101.761834ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.645666ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.955083ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.813416ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.587542ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.914ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.124125ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.771667ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.434417ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.596709ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.984041ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (82.96525ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.083333ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.678167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.871167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.494167ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.266333ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.982292ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.580917ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.79225ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.2755ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.006625ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.975375ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.704834ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.857458ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.4605ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.845875ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.976334ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (99.126708ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.908334ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.863375ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.059791ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.270208ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.406625ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.378083ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (79.274208ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.924291ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.242291ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.394834ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.942667ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.218625ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.883959ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.76825ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (86.138584ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.529167ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.946958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.658625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.750833ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.277375ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.272041ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (81.440208ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (82.390708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.518084ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.880209ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.581583ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.321083ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.4735ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.587917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.477917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.341208ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.845083ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.342959ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.620917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.247292ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.919042ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.612625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.601625ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.995375ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (67.771958ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (94.521209ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.430875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.723583ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (80.65725ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.369458ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.930542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (82.471875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (93.776916ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.4355ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (76.1695ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.579ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (96.451167ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.006125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.157083ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.627708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.273042ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.1415ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.584708ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.544667ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.004333ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (57.308042ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (55.47275ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.188542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.372ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.609291ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (83.775417ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.015ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (62.928125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.580416ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (77.117666ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.23125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (62.292792ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.740709ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (77.71025ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (82.341209ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (98.101125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.563459ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.080875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.786542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.594ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.206667ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.1295ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (99.963417ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.218125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.530458ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.889334ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.831084ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.707583ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (48.387042ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (55.904ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.161542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.770583ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.309875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (68.217875ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (89.827667ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (63.457292ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (92.488125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (90.824333ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (73.830084ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (63.924542ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (87.822125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (88.270458ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (84.780459ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (85.658917ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.4505ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (78.3125ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (91.593916ms)

                                                
                                                
** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

                                                
                                                
** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
helpers_test.go:419: (dbg) Run:  kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Non-zero exit: kubectl --context addons-098000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default: exit status 1 (95.65325ms)

** stderr ** 
	error: the server doesn't have a resource type "volumesnapshot"

** /stderr **
helpers_test.go:421: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: exit status 1
addons_test.go:566: failed waiting for volume snapshot new-snapshot-demo: context deadline exceeded
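The Run/WARNING pairs above are successive polls by the test helper: it re-runs the kubectl query until the snapshot reports readyToUse=true or the surrounding context hits its deadline. Below is a minimal Go sketch of that polling pattern; it is illustrative only, not the actual helpers_test.go/addons_test.go code, and the 2-second interval, 6-minute timeout, and function name are assumptions.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForSnapshotReady shells out to kubectl and retries until the
// VolumeSnapshot reports readyToUse=true or ctx expires.
// (Sketch only; interval and names are hypothetical.)
func waitForSnapshotReady(ctx context.Context, kubeContext, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second) // hypothetical poll interval
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext,
			"get", "volumesnapshot", name,
			"-n", ns,
			"-o", "jsonpath={.status.readyToUse}").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil // snapshot is ready
		}
		// Each failed attempt corresponds to one WARNING line in the report;
		// in this run kubectl exits 1 every time because the volumesnapshot
		// resource type is not registered in the cluster.
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed waiting for volume snapshot %s: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // hypothetical deadline
	defer cancel()
	if err := waitForSnapshotReady(ctx, "addons-098000", "default", "new-snapshot-demo"); err != nil {
		fmt.Println(err)
	}
}

Because the volumesnapshot resource type is never registered in this cluster, every attempt exits 1, so a loop like this can only end with the "context deadline exceeded" failure recorded above.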
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-098000 -n addons-098000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-098000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| delete  | -p download-only-879000        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | --download-only -p             | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |                     |
	|         | binary-mirror-025000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000        | binary-mirror-025000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:21 PDT |
	| start   | -p addons-098000               | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT | 10 Jun 23 09:28 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:28 PDT | 10 Jun 23 09:28 PDT |
	|         | addons-098000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-098000        | jenkins | v1.30.1 | 10 Jun 23 09:40 PDT | 10 Jun 23 09:40 PDT |
	|         | -p addons-098000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:54.764352    1637 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:54.764757    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764761    1637 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:54.764764    1637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:54.764861    1637 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:21:54.766294    1637 out.go:303] Setting JSON to false
	I0610 09:21:54.781540    1637 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1285,"bootTime":1686412829,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:54.781615    1637 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:54.786460    1637 out.go:177] * [addons-098000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:54.793542    1637 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:21:54.798440    1637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:54.793561    1637 notify.go:220] Checking for updates...
	I0610 09:21:54.804413    1637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:54.807450    1637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:54.810460    1637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:21:54.811765    1637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:21:54.814627    1637 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:54.818412    1637 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:21:54.823426    1637 start.go:297] selected driver: qemu2
	I0610 09:21:54.823432    1637 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:54.823441    1637 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:21:54.825256    1637 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:54.828578    1637 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:21:54.831535    1637 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:21:54.831554    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:21:54.831575    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:54.831579    1637 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:21:54.831586    1637 start_flags.go:319] config:
	{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:54.831700    1637 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:54.840445    1637 out.go:177] * Starting control plane node addons-098000 in cluster addons-098000
	I0610 09:21:54.844425    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:54.844451    1637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:21:54.844469    1637 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:54.844530    1637 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 09:21:54.844535    1637 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:21:54.844735    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:21:54.844750    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json: {Name:mkfbe060a3258f68fbe8b01ce26e4a7ada2f24f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:54.844947    1637 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:21:54.844969    1637 start.go:364] acquiring machines lock for addons-098000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:21:54.845063    1637 start.go:368] acquired machines lock for "addons-098000" in 89.292µs
	I0610 09:21:54.845075    1637 start.go:93] Provisioning new machine with config: &{Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:21:54.845115    1637 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:21:54.853376    1637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 09:21:55.217388    1637 start.go:159] libmachine.API.Create for "addons-098000" (driver="qemu2")
	I0610 09:21:55.217427    1637 client.go:168] LocalClient.Create starting
	I0610 09:21:55.217549    1637 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:21:55.301145    1637 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:21:55.414002    1637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:21:55.826273    1637 main.go:141] libmachine: Creating SSH key...
	I0610 09:21:55.859428    1637 main.go:141] libmachine: Creating Disk image...
	I0610 09:21:55.859434    1637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:21:55.859612    1637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.941560    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:55.941581    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.941655    1637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2 +20000M
	I0610 09:21:55.948999    1637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:21:55.949013    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:55.949042    1637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:55.949049    1637 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:21:55.949080    1637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:e2:60:7a:4e:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/disk.qcow2
	I0610 09:21:56.034280    1637 main.go:141] libmachine: STDOUT: 
	I0610 09:21:56.034334    1637 main.go:141] libmachine: STDERR: 
	I0610 09:21:56.034338    1637 main.go:141] libmachine: Attempt 0
	I0610 09:21:56.034355    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:21:58.036587    1637 main.go:141] libmachine: Attempt 1
	I0610 09:21:58.036664    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:00.038868    1637 main.go:141] libmachine: Attempt 2
	I0610 09:22:00.038909    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:02.040980    1637 main.go:141] libmachine: Attempt 3
	I0610 09:22:02.040996    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:04.043076    1637 main.go:141] libmachine: Attempt 4
	I0610 09:22:04.043113    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:06.045175    1637 main.go:141] libmachine: Attempt 5
	I0610 09:22:06.045200    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047388    1637 main.go:141] libmachine: Attempt 6
	I0610 09:22:08.047472    1637 main.go:141] libmachine: Searching for c2:e2:60:7a:4e:46 in /var/db/dhcpd_leases ...
	I0610 09:22:08.047875    1637 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0610 09:22:08.047987    1637 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6485f4af}
	I0610 09:22:08.048012    1637 main.go:141] libmachine: Found match: c2:e2:60:7a:4e:46
	I0610 09:22:08.048053    1637 main.go:141] libmachine: IP: 192.168.105.2
	I0610 09:22:08.048083    1637 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0610 09:22:10.069705    1637 machine.go:88] provisioning docker machine ...
	I0610 09:22:10.069788    1637 buildroot.go:166] provisioning hostname "addons-098000"
	I0610 09:22:10.070644    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.071570    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.071588    1637 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-098000 && echo "addons-098000" | sudo tee /etc/hostname
	I0610 09:22:10.164038    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-098000
	
	I0610 09:22:10.164160    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.164626    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.164641    1637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:22:10.239261    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:22:10.239281    1637 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:22:10.239300    1637 buildroot.go:174] setting up certificates
	I0610 09:22:10.239307    1637 provision.go:83] configureAuth start
	I0610 09:22:10.239314    1637 provision.go:138] copyHostCerts
	I0610 09:22:10.239507    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:22:10.240632    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:22:10.241010    1637 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:22:10.241260    1637 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.addons-098000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-098000]
	I0610 09:22:10.307069    1637 provision.go:172] copyRemoteCerts
	I0610 09:22:10.307140    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:22:10.307172    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.339991    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:22:10.346931    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 09:22:10.353742    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:22:10.360626    1637 provision.go:86] duration metric: configureAuth took 121.313416ms
	I0610 09:22:10.360639    1637 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:22:10.361002    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:10.361055    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.361272    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.361276    1637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:22:10.420194    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:22:10.420201    1637 buildroot.go:70] root file system type: tmpfs
	I0610 09:22:10.420251    1637 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:22:10.420295    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.420542    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.420577    1637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:22:10.485025    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:22:10.485070    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.485298    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.485310    1637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:22:10.830569    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:22:10.830580    1637 machine.go:91] provisioned docker machine in 760.843209ms
	I0610 09:22:10.830585    1637 client.go:171] LocalClient.Create took 15.613176541s
	I0610 09:22:10.830594    1637 start.go:167] duration metric: libmachine.API.Create for "addons-098000" took 15.613236583s
	I0610 09:22:10.830598    1637 start.go:300] post-start starting for "addons-098000" (driver="qemu2")
	I0610 09:22:10.830601    1637 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:22:10.830682    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:22:10.830692    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.862119    1637 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:22:10.863469    1637 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:22:10.863478    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:22:10.863540    1637 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:22:10.863565    1637 start.go:303] post-start completed in 32.963459ms
	I0610 09:22:10.863901    1637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/config.json ...
	I0610 09:22:10.864045    1637 start.go:128] duration metric: createHost completed in 16.018950083s
	I0610 09:22:10.864069    1637 main.go:141] libmachine: Using SSH client type: native
	I0610 09:22:10.864287    1637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1029286d0] 0x10292b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 09:22:10.864291    1637 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:22:10.923434    1637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686414131.384712585
	
	I0610 09:22:10.923441    1637 fix.go:207] guest clock: 1686414131.384712585
	I0610 09:22:10.923446    1637 fix.go:220] Guest: 2023-06-10 09:22:11.384712585 -0700 PDT Remote: 2023-06-10 09:22:10.864048 -0700 PDT m=+16.118188126 (delta=520.664585ms)
	I0610 09:22:10.923456    1637 fix.go:191] guest clock delta is within tolerance: 520.664585ms
	I0610 09:22:10.923459    1637 start.go:83] releasing machines lock for "addons-098000", held for 16.0784145s
	I0610 09:22:10.923756    1637 ssh_runner.go:195] Run: cat /version.json
	I0610 09:22:10.923765    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:10.923833    1637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:22:10.923872    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:11.040251    1637 ssh_runner.go:195] Run: systemctl --version
	I0610 09:22:11.042905    1637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:22:11.045415    1637 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:22:11.045461    1637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:22:11.051643    1637 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:22:11.051653    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:11.051736    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:11.061365    1637 docker.go:633] Got preloaded images: 
	I0610 09:22:11.061374    1637 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:22:11.061418    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:11.064624    1637 ssh_runner.go:195] Run: which lz4
	I0610 09:22:11.066056    1637 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 09:22:11.067511    1637 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:22:11.067524    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 09:22:12.384653    1637 docker.go:597] Took 1.318649 seconds to copy over tarball
	I0610 09:22:12.384711    1637 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:22:13.518722    1637 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.133975834s)
	I0610 09:22:13.518746    1637 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:22:13.534141    1637 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:22:13.537423    1637 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:22:13.542380    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:13.617910    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:15.783768    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.165840375s)
	I0610 09:22:15.783797    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.783942    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.789136    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:22:15.792061    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:22:15.794990    1637 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:22:15.795014    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:22:15.798511    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.801745    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:22:15.804884    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:22:15.807635    1637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:22:15.810661    1637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:22:15.814158    1637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:22:15.817306    1637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:22:15.819948    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:15.905204    1637 ssh_runner.go:195] Run: sudo systemctl restart containerd
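The run of sed edits at 09:22:15.789–15.814 pins the pause image, swaps runc onto the v2 shim, disables the systemd cgroup driver, and points containerd at the bridge CNI directory. A quick, hypothetical way to confirm inside the guest that the edits landed (not part of the test run) is:

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # Expected values after the edits above, reconstructed from the sed
    # expressions in the log rather than captured from the VM:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   conf_dir = "/etc/cni/net.d"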
	I0610 09:22:15.910905    1637 start.go:481] detecting cgroup driver to use...
	I0610 09:22:15.910988    1637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:22:15.916986    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.922219    1637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:22:15.929205    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:22:15.933866    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.938677    1637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:22:15.974269    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:22:15.979243    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:22:15.984512    1637 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:22:15.985792    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:22:15.988369    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:22:15.993006    1637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:22:16.073036    1637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:22:16.147707    1637 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:22:16.147726    1637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:22:16.152764    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:16.219604    1637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:22:17.389947    1637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.170326875s)
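The 144-byte daemon.json pushed at 09:22:16.147 is what moves dockerd onto the cgroupfs driver; its contents are not echoed in the log. A sketch of an equivalent file, assuming minikube's usual template, could be written in the guest like this:

    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker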
	I0610 09:22:17.390012    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.468450    1637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:22:17.548751    1637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:22:17.629562    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.707590    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:22:17.714930    1637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:22:17.794794    1637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:22:17.819341    1637 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:22:17.819427    1637 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:22:17.821557    1637 start.go:549] Will wait 60s for crictl version
	I0610 09:22:17.821591    1637 ssh_runner.go:195] Run: which crictl
	I0610 09:22:17.825207    1637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:22:17.842430    1637 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:22:17.842501    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.850299    1637 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:22:17.866701    1637 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:22:17.866866    1637 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:22:17.868327    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:22:17.871885    1637 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:22:17.871927    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.877489    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.877499    1637 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:22:17.877550    1637 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:22:17.883143    1637 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:22:17.883157    1637 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:22:17.883198    1637 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:22:17.890410    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:17.890420    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:17.890445    1637 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:22:17.890455    1637 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-098000 NodeName:addons-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:22:17.890526    1637 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-098000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:22:17.890573    1637 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:22:17.890631    1637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:22:17.893850    1637 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:22:17.893880    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:22:17.896724    1637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0610 09:22:17.901642    1637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:22:17.906483    1637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0610 09:22:17.911373    1637 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0610 09:22:17.912694    1637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
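Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same filter-then-append pattern, so repeated starts never stack duplicate entries. A commented restatement of the command, using the same addresses as the log:

    # Drop any line already ending in "<tab>control-plane.minikube.internal",
    # append the fresh mapping, then copy the temp file back as root so the
    # redirection itself does not need sudo.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      echo $'192.168.105.2\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts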
	I0610 09:22:17.916067    1637 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000 for IP: 192.168.105.2
	I0610 09:22:17.916076    1637 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:17.916236    1637 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:22:18.022564    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt ...
	I0610 09:22:18.022569    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt: {Name:mk821d9de36f93438ad430683cb25e2f1c33c9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022803    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key ...
	I0610 09:22:18.022806    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key: {Name:mk750eea32c0b02b6ad84d81711cbfd77ceefe90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.022913    1637 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:22:18.159699    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt ...
	I0610 09:22:18.159708    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt: {Name:mk10e39bee2c5c6785228bc7733548a740243d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.159914    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key ...
	I0610 09:22:18.159917    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key: {Name:mk04d776031cd8d2755a757ba7736e35a9c25212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.160037    1637 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key
	I0610 09:22:18.160044    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt with IP's: []
	I0610 09:22:18.246526    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt ...
	I0610 09:22:18.246530    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: {Name:mk301aca75dad20ac385eb683aae1662edff3d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246697    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key ...
	I0610 09:22:18.246700    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.key: {Name:mkdf4a2bc618a029a53fbd786e41dffe68b8316b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.246803    1637 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969
	I0610 09:22:18.246812    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:22:18.411436    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 ...
	I0610 09:22:18.411440    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969: {Name:mk922ab871b245e2b8e7e4b2a109a553fe1bcc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411596    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 ...
	I0610 09:22:18.411599    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969: {Name:mkdde2defc189629d0924fe6871b2adb52e47c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.411697    1637 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt
	I0610 09:22:18.411933    1637 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key
	I0610 09:22:18.412033    1637 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key
	I0610 09:22:18.412047    1637 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt with IP's: []
	I0610 09:22:18.578568    1637 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt ...
	I0610 09:22:18.578583    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt: {Name:mkb4544f3ff14d84a98fd9ec92bfcdbb5d50e84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.578783    1637 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key ...
	I0610 09:22:18.578786    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key: {Name:mk82ce3998197ea814bf8f591a5b4b56c617f405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:18.579030    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:22:18.579468    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:22:18.579491    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:22:18.579672    1637 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:22:18.580285    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:22:18.587660    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:22:18.594728    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:22:18.602219    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 09:22:18.609690    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:22:18.617442    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:22:18.624297    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:22:18.631049    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:22:18.638070    1637 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:22:18.644969    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:22:18.650094    1637 ssh_runner.go:195] Run: openssl version
	I0610 09:22:18.652167    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:22:18.655090    1637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656540    1637 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.656561    1637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:22:18.658363    1637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
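The file name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, the value produced by the x509 -hash call just above, and the .0 symlink is what lets OpenSSL locate the CA during verification. The link could be recreated generically with this hypothetical guest-side command:

    # Compute the subject hash and point the hash-named symlink at the CA cert,
    # mirroring what the log does with the precomputed value b5213941.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"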
	I0610 09:22:18.661572    1637 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:22:18.662872    1637 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:22:18.662908    1637 kubeadm.go:404] StartCluster: {Name:addons-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-098000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:22:18.662975    1637 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:22:18.668496    1637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:22:18.671389    1637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:22:18.674606    1637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:22:18.677626    1637 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:22:18.677644    1637 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:22:18.703158    1637 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:22:18.703188    1637 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:22:18.757797    1637 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:22:18.757860    1637 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:22:18.757910    1637 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:22:18.816123    1637 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:22:18.821365    1637 out.go:204]   - Generating certificates and keys ...
	I0610 09:22:18.821409    1637 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:22:18.821441    1637 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:22:19.085233    1637 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:22:19.181413    1637 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:22:19.330348    1637 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:22:19.412707    1637 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:22:19.604000    1637 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:22:19.604069    1637 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.814398    1637 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:22:19.814478    1637 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-098000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0610 09:22:19.907005    1637 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:22:20.056367    1637 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:22:20.125295    1637 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:22:20.125333    1637 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:22:20.241297    1637 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:22:20.330399    1637 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:22:20.489216    1637 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:22:20.764229    1637 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:22:20.771051    1637 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:22:20.771103    1637 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:22:20.771135    1637 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:22:20.859965    1637 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:22:20.864105    1637 out.go:204]   - Booting up control plane ...
	I0610 09:22:20.864178    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:22:20.864224    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:22:20.864257    1637 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:22:20.864302    1637 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:22:20.865267    1637 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:22:24.366796    1637 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501337 seconds
	I0610 09:22:24.366861    1637 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:22:24.372204    1637 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:22:24.898455    1637 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:22:24.898779    1637 kubeadm.go:322] [mark-control-plane] Marking the node addons-098000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:22:25.404043    1637 kubeadm.go:322] [bootstrap-token] Using token: 8xmw5d.kvohdu7dlcpn05ob
	I0610 09:22:25.410608    1637 out.go:204]   - Configuring RBAC rules ...
	I0610 09:22:25.410669    1637 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:22:25.411737    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:22:25.418545    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:22:25.419904    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:22:25.421252    1637 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:22:25.422283    1637 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:22:25.427205    1637 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:22:25.603958    1637 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:22:25.815834    1637 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:22:25.816185    1637 kubeadm.go:322] 
	I0610 09:22:25.816225    1637 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:22:25.816233    1637 kubeadm.go:322] 
	I0610 09:22:25.816291    1637 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:22:25.816295    1637 kubeadm.go:322] 
	I0610 09:22:25.816308    1637 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:22:25.816346    1637 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:22:25.816388    1637 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:22:25.816392    1637 kubeadm.go:322] 
	I0610 09:22:25.816425    1637 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:22:25.816430    1637 kubeadm.go:322] 
	I0610 09:22:25.816463    1637 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:22:25.816466    1637 kubeadm.go:322] 
	I0610 09:22:25.816508    1637 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:22:25.816560    1637 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:22:25.816602    1637 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:22:25.816605    1637 kubeadm.go:322] 
	I0610 09:22:25.816653    1637 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:22:25.816694    1637 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:22:25.816699    1637 kubeadm.go:322] 
	I0610 09:22:25.816749    1637 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.816801    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:22:25.816815    1637 kubeadm.go:322] 	--control-plane 
	I0610 09:22:25.816823    1637 kubeadm.go:322] 
	I0610 09:22:25.816880    1637 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:22:25.816883    1637 kubeadm.go:322] 
	I0610 09:22:25.816931    1637 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8xmw5d.kvohdu7dlcpn05ob \
	I0610 09:22:25.817003    1637 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:22:25.817072    1637 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:22:25.817175    1637 kubeadm.go:322] W0610 16:22:19.219117    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817283    1637 kubeadm.go:322] W0610 16:22:21.323610    1314 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:22:25.817294    1637 cni.go:84] Creating CNI manager for ""
	I0610 09:22:25.817303    1637 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:22:25.823848    1637 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:22:25.826928    1637 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:22:25.830443    1637 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 09:22:25.836316    1637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:22:25.836378    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.836393    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=addons-098000 minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:25.900338    1637 ops.go:34] apiserver oom_adj: -16
	I0610 09:22:25.900382    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.433306    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:26.933284    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.433115    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:27.933305    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.433535    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:28.933493    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.433524    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:29.932908    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.433563    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:30.933551    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.433517    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:31.933506    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.433459    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:32.933537    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.433223    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:33.933503    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.432603    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:34.933481    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.433267    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:35.933228    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.433253    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:36.933272    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.433226    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:37.933202    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.431772    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:38.933197    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.432078    1637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:22:39.482163    1637 kubeadm.go:1076] duration metric: took 13.645838667s to wait for elevateKubeSystemPrivileges.
	I0610 09:22:39.482178    1637 kubeadm.go:406] StartCluster complete in 20.819301625s
	I0610 09:22:39.482188    1637 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482341    1637 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:22:39.482516    1637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:22:39.482746    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:22:39.482786    1637 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 09:22:39.482870    1637 addons.go:66] Setting volumesnapshots=true in profile "addons-098000"
	I0610 09:22:39.482872    1637 addons.go:66] Setting inspektor-gadget=true in profile "addons-098000"
	I0610 09:22:39.482879    1637 addons.go:228] Setting addon volumesnapshots=true in "addons-098000"
	I0610 09:22:39.482922    1637 addons.go:66] Setting registry=true in profile "addons-098000"
	I0610 09:22:39.482902    1637 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-098000"
	I0610 09:22:39.482936    1637 addons.go:228] Setting addon registry=true in "addons-098000"
	I0610 09:22:39.482958    1637 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:39.482880    1637 addons.go:228] Setting addon inspektor-gadget=true in "addons-098000"
	I0610 09:22:39.482979    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482984    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.482878    1637 addons.go:66] Setting gcp-auth=true in profile "addons-098000"
	I0610 09:22:39.483016    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483020    1637 mustload.go:65] Loading cluster: addons-098000
	I0610 09:22:39.483034    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.483276    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.483275    1637 config.go:182] Loaded profile config "addons-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:22:39.482885    1637 addons.go:66] Setting ingress=true in profile "addons-098000"
	I0610 09:22:39.483383    1637 addons.go:228] Setting addon ingress=true in "addons-098000"
	I0610 09:22:39.483423    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.483508    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483523    1637 addons.go:274] "addons-098000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0610 09:22:39.483525    1637 addons.go:464] Verifying addon registry=true in "addons-098000"
	W0610 09:22:39.483511    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483543    1637 addons.go:274] "addons-098000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0610 09:22:39.487787    1637 out.go:177] * Verifying registry addon...
	I0610 09:22:39.482886    1637 addons.go:66] Setting default-storageclass=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting cloud-spanner=true in profile "addons-098000"
	I0610 09:22:39.482888    1637 addons.go:66] Setting ingress-dns=true in profile "addons-098000"
	I0610 09:22:39.482892    1637 addons.go:66] Setting storage-provisioner=true in profile "addons-098000"
	I0610 09:22:39.482899    1637 addons.go:66] Setting metrics-server=true in profile "addons-098000"
	W0610 09:22:39.483773    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	W0610 09:22:39.483867    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.484558    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.494895    1637 addons.go:228] Setting addon ingress-dns=true in "addons-098000"
	I0610 09:22:39.494904    1637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-098000"
	I0610 09:22:39.494907    1637 addons.go:228] Setting addon metrics-server=true in "addons-098000"
	I0610 09:22:39.494911    1637 addons.go:228] Setting addon cloud-spanner=true in "addons-098000"
	I0610 09:22:39.494913    1637 addons.go:228] Setting addon storage-provisioner=true in "addons-098000"
	W0610 09:22:39.494917    1637 addons.go:274] "addons-098000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0610 09:22:39.494920    1637 addons.go:274] "addons-098000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0610 09:22:39.495382    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 09:22:39.500831    1637 addons.go:464] Verifying addon ingress=true in "addons-098000"
	I0610 09:22:39.500842    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500849    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.504830    1637 out.go:177] * Verifying ingress addon...
	I0610 09:22:39.500952    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.500997    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 09:22:39.501041    1637 host.go:66] Checking if "addons-098000" exists ...
	W0610 09:22:39.501118    1637 host.go:54] host status for "addons-098000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/monitor: connect: connection refused
	I0610 09:22:39.514859    1637 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0610 09:22:39.511954    1637 addons.go:274] "addons-098000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0610 09:22:39.512421    1637 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 09:22:39.517592    1637 addons.go:228] Setting addon default-storageclass=true in "addons-098000"
	I0610 09:22:39.517921    1637 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.518096    1637 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 09:22:39.521871    1637 addons.go:464] Verifying addon metrics-server=true in "addons-098000"
	I0610 09:22:39.527803    1637 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 09:22:39.528897    1637 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 09:22:39.533879    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:22:39.533885    1637 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 09:22:39.533900    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:39.539950    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 09:22:39.549899    1637 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.549908    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 09:22:39.549915    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540014    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.540659    1637 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.550015    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:22:39.550019    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.552885    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 09:22:39.545910    1637 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.547022    1637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:22:39.555818    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 09:22:39.555836    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.558872    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 09:22:39.563787    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 09:22:39.565032    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 09:22:39.576758    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 09:22:39.585719    1637 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 09:22:39.588857    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 09:22:39.588866    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 09:22:39.588875    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:39.610676    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 09:22:39.641637    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:22:39.644621    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:22:39.683769    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 09:22:39.740787    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 09:22:39.740799    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 09:22:39.840307    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 09:22:39.840321    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 09:22:39.985655    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 09:22:39.985667    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 09:22:40.064364    1637 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-098000" context rescaled to 1 replicas
	I0610 09:22:40.064382    1637 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:22:40.068539    1637 out.go:177] * Verifying Kubernetes components...
	I0610 09:22:40.077600    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:40.261757    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 09:22:40.261768    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 09:22:40.290415    1637 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 09:22:40.290425    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 09:22:40.300542    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 09:22:40.300551    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 09:22:40.308642    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 09:22:40.308652    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 09:22:40.313342    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 09:22:40.313353    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 09:22:40.318717    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 09:22:40.318725    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 09:22:40.323460    1637 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.323466    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 09:22:40.335717    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 09:22:40.661069    1637 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105262875s)
	I0610 09:22:40.661101    1637 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 09:22:40.737190    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.12650025s)
	I0610 09:22:40.873352    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23170025s)
	I0610 09:22:40.873360    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.228730125s)
	I0610 09:22:40.873397    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.189617792s)
	I0610 09:22:40.873843    1637 node_ready.go:35] waiting up to 6m0s for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875337    1637 node_ready.go:49] node "addons-098000" has status "Ready":"True"
	I0610 09:22:40.875343    1637 node_ready.go:38] duration metric: took 1.493375ms waiting for node "addons-098000" to be "Ready" ...
	I0610 09:22:40.875346    1637 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:40.878632    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881351    1637 pod_ready.go:92] pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:40.881360    1637 pod_ready.go:81] duration metric: took 2.720875ms waiting for pod "coredns-5d78c9869d-f2tnn" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:40.881363    1637 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:41.422744    1637 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.08700475s)
	I0610 09:22:41.422764    1637 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-098000"
	I0610 09:22:41.429025    1637 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 09:22:41.436428    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 09:22:41.441210    1637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 09:22:41.441218    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:41.945707    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.446004    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:42.891987    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:42.949163    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.445226    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:43.945705    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.445736    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:44.893909    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:44.949633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.445855    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:45.945805    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.106349    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 09:22:46.106363    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.140536    1637 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 09:22:46.145624    1637 addons.go:228] Setting addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.145643    1637 host.go:66] Checking if "addons-098000" exists ...
	I0610 09:22:46.146378    1637 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 09:22:46.146386    1637 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/addons-098000/id_rsa Username:docker}
	I0610 09:22:46.179928    1637 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 09:22:46.183883    1637 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 09:22:46.187898    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 09:22:46.187903    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 09:22:46.192588    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 09:22:46.192594    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 09:22:46.199251    1637 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.199256    1637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 09:22:46.204462    1637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 09:22:46.429785    1637 addons.go:464] Verifying addon gcp-auth=true in "addons-098000"
	I0610 09:22:46.434320    1637 out.go:177] * Verifying gcp-auth addon...
	I0610 09:22:46.440768    1637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 09:22:46.443515    1637 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 09:22:46.443521    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:46.446140    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949654    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:46.949910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.389319    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:47.445303    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.446055    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:47.946177    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:47.946875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.446743    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.447103    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:48.945711    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:48.946918    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.389715    1637 pod_ready.go:102] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status "Ready":"False"
	I0610 09:22:49.445862    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.448994    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:49.945095    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:49.945638    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.446626    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.446936    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:50.887650    1637 pod_ready.go:97] pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887663    1637 pod_ready.go:81] duration metric: took 10.00631125s waiting for pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace to be "Ready" ...
	E0610 09:22:50.887668    1637 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-hpvv2" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:40 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-10 09:22:39 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 PodIP: PodIPs:[] StartTime:2023-06-10 09:22:40 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-10 09:22:40 -0700 PDT,FinishedAt:2023-06-10 09:22:50 -0700 PDT,ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://a6927aac3d3f923633152dfcff1a5f77d2f8691a71a43ff52f53791e6d29780f Started:0x1400191b730 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0610 09:22:50.887672    1637 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890299    1637 pod_ready.go:92] pod "etcd-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.890307    1637 pod_ready.go:81] duration metric: took 2.63175ms waiting for pod "etcd-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.890310    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892694    1637 pod_ready.go:92] pod "kube-apiserver-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.892699    1637 pod_ready.go:81] duration metric: took 2.386083ms waiting for pod "kube-apiserver-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.892703    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895043    1637 pod_ready.go:92] pod "kube-controller-manager-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.895049    1637 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "kube-controller-manager-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.895053    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897341    1637 pod_ready.go:92] pod "kube-proxy-jpnqh" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:50.897346    1637 pod_ready.go:81] duration metric: took 2.29075ms waiting for pod "kube-proxy-jpnqh" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.897350    1637 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:50.945358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:50.946279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.288420    1637 pod_ready.go:92] pod "kube-scheduler-addons-098000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:22:51.288430    1637 pod_ready.go:81] duration metric: took 391.078333ms waiting for pod "kube-scheduler-addons-098000" in "kube-system" namespace to be "Ready" ...
	I0610 09:22:51.288436    1637 pod_ready.go:38] duration metric: took 10.413098792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:22:51.288445    1637 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:22:51.288516    1637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:22:51.295818    1637 api_server.go:72] duration metric: took 11.231423584s to wait for apiserver process to appear ...
	I0610 09:22:51.295824    1637 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:22:51.295831    1637 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0610 09:22:51.299125    1637 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0610 09:22:51.299826    1637 api_server.go:141] control plane version: v1.27.2
	I0610 09:22:51.299832    1637 api_server.go:131] duration metric: took 4.005625ms to wait for apiserver health ...
	I0610 09:22:51.299835    1637 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:22:51.445314    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:51.446212    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.490284    1637 system_pods.go:59] 11 kube-system pods found
	I0610 09:22:51.490295    1637 system_pods.go:61] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.490299    1637 system_pods.go:61] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.490303    1637 system_pods.go:61] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.490306    1637 system_pods.go:61] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.490311    1637 system_pods.go:61] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.490314    1637 system_pods.go:61] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.490317    1637 system_pods.go:61] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.490320    1637 system_pods.go:61] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.490323    1637 system_pods.go:61] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.490325    1637 system_pods.go:61] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.490336    1637 system_pods.go:61] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.490341    1637 system_pods.go:74] duration metric: took 190.503333ms to wait for pod list to return data ...
	I0610 09:22:51.490345    1637 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:22:51.687921    1637 default_sa.go:45] found service account: "default"
	I0610 09:22:51.687931    1637 default_sa.go:55] duration metric: took 197.581625ms for default service account to be created ...
	I0610 09:22:51.687935    1637 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:22:51.890310    1637 system_pods.go:86] 11 kube-system pods found
	I0610 09:22:51.890320    1637 system_pods.go:89] "coredns-5d78c9869d-f2tnn" [ca3d0440-ef50-4214-98e6-d03acf962659] Running
	I0610 09:22:51.890326    1637 system_pods.go:89] "csi-hostpath-attacher-0" [036292ea-9b6d-4270-8dc0-124509d9000f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 09:22:51.890330    1637 system_pods.go:89] "csi-hostpath-resizer-0" [feb75893-38a6-47e9-8eb7-b0dd6b1e6634] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 09:22:51.890333    1637 system_pods.go:89] "csi-hostpathplugin-pjvh6" [150592c1-289e-413a-aa2e-7d0350e39b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 09:22:51.890336    1637 system_pods.go:89] "etcd-addons-098000" [1c6b983c-966e-4df8-bf44-48fc87dabafe] Running
	I0610 09:22:51.890338    1637 system_pods.go:89] "kube-apiserver-addons-098000" [5a9e9998-0cd7-4ff1-801f-4950c1a54c40] Running
	I0610 09:22:51.890341    1637 system_pods.go:89] "kube-controller-manager-addons-098000" [0f92af71-dfec-4a23-aaba-aa57d8acbc2a] Running
	I0610 09:22:51.890344    1637 system_pods.go:89] "kube-ingress-dns-minikube" [ef4b950f-9458-4bb3-8460-5c464e4ed538] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 09:22:51.890349    1637 system_pods.go:89] "kube-proxy-jpnqh" [061edaff-afd1-4550-a96c-2055505ce150] Running
	I0610 09:22:51.890351    1637 system_pods.go:89] "kube-scheduler-addons-098000" [b5293081-e7d2-45a2-9d63-3ca1c6c5e46e] Running
	I0610 09:22:51.890354    1637 system_pods.go:89] "storage-provisioner" [b72b4ee7-fcc1-4456-ae8b-8a39acc6fbe9] Running
	I0610 09:22:51.890357    1637 system_pods.go:126] duration metric: took 202.419584ms to wait for k8s-apps to be running ...
	I0610 09:22:51.890363    1637 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:22:51.890418    1637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:22:51.897401    1637 system_svc.go:56] duration metric: took 7.035125ms WaitForService to wait for kubelet.
	I0610 09:22:51.897410    1637 kubeadm.go:581] duration metric: took 11.8330175s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:22:51.897420    1637 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:22:51.944537    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:51.945311    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.087254    1637 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:22:52.087281    1637 node_conditions.go:123] node cpu capacity is 2
	I0610 09:22:52.087290    1637 node_conditions.go:105] duration metric: took 189.867833ms to run NodePressure ...
	I0610 09:22:52.087295    1637 start.go:228] waiting for startup goroutines ...
	I0610 09:22:52.445279    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:52.445610    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.945799    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:52.946052    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.445389    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.446014    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:53.945473    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:53.946237    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.446325    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:54.946076    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.446618    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.448382    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:55.948114    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:55.951263    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.447181    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.447511    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:56.945501    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:56.946418    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.445349    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.445910    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:57.945410    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:57.946065    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.447469    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.448009    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:58.945353    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:58.946520    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454875    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:22:59.454959    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.946148    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:22:59.947450    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.446206    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:00.447700    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.944434    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:00.945129    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.445646    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.446643    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:01.945710    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:01.947152    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.450730    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.454285    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:02.952960    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:02.955376    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.446358    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.447878    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:03.945294    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:03.946290    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.445145    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.446164    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:04.946364    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:04.946514    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.449729    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.453690    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:05.947873    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:05.950281    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.445562    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.445795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:06.946136    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:06.947509    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.445951    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.446633    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:07.945814    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:07.946157    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.446086    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.446099    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:08.970991    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:08.971383    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.448620    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:09.449087    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.946728    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:09.948250    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446827    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.446978    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:10.945421    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:10.945732    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.444797    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:11.445621    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.948926    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:11.949262    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.452305    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:12.453786    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.948653    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:12.949795    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.445378    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.446558    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:13.946404    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:13.946644    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.446073    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.446331    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:14.946569    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:14.946725    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.445689    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.446865    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:15.947373    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:15.948973    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.445756    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:16.446819    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.944171    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:16.945088    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.448798    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.450089    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:17.952301    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:17.955532    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.446658    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:18.945244    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:18.946363    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.445300    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:19.445962    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944002    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:19.944781    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.446084    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:20.446223    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.952440    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:20.954313    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.445625    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.446916    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:21.945782    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:21.947236    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.445836    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.446162    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:22.945365    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:22.946169    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.449820    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 09:23:23.452877    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.953442    1637 kapi.go:107] duration metric: took 37.512712584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 09:23:23.958122    1637 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-098000 cluster.
	I0610 09:23:23.957179    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:23.961932    1637 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 09:23:23.965925    1637 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 09:23:24.450360    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:24.945980    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.445712    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:25.946008    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.446034    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:26.950257    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.454943    1637 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 09:23:27.956882    1637 kapi.go:107] duration metric: took 46.520505042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 09:28:39.510321    1637 kapi.go:107] duration metric: took 6m0.007516916s to wait for kubernetes.io/minikube-addons=registry ...
	W0610 09:28:39.510625    1637 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0610 09:28:39.531369    1637 kapi.go:107] duration metric: took 6m0.011549375s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0610 09:28:39.531491    1637 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0610 09:28:39.539250    1637 out.go:177] * Enabled addons: volumesnapshots, inspektor-gadget, metrics-server, cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, gcp-auth, csi-hostpath-driver
	I0610 09:28:39.545184    1637 addons.go:499] enable addons completed in 6m0.055013834s: enabled=[volumesnapshots inspektor-gadget metrics-server cloud-spanner storage-provisioner default-storageclass ingress-dns gcp-auth csi-hostpath-driver]
	I0610 09:28:39.545227    1637 start.go:233] waiting for cluster config update ...
	I0610 09:28:39.545256    1637 start.go:242] writing updated cluster config ...
	I0610 09:28:39.546371    1637 ssh_runner.go:195] Run: rm -f paused
	I0610 09:28:39.689843    1637 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:28:39.694186    1637 out.go:177] 
	W0610 09:28:39.697254    1637 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:28:39.701213    1637 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:28:39.709228    1637 out.go:177] * Done! kubectl is now configured to use "addons-098000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:47:24 UTC. --
	Jun 10 16:40:47 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:40:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b1c09a27c65e6025a8628d242894806ef3c84888ee3461978597ea82c9a8359/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 16:40:47 addons-098000 dockerd[933]: time="2023-06-10T16:40:47.656476255Z" level=warning msg="reference for unknown type: " digest="sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be" remote="ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 10 16:40:51 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:40:51Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.17.1@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626101618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626131368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626141868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:40:51 addons-098000 dockerd[939]: time="2023-06-10T16:40:51.626146368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.637249959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.637496039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.637517955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:14 addons-098000 dockerd[939]: time="2023-06-10T16:41:14.646608437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:14 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:41:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ecfebe56570bafdd4953fcbed5cc491440e7a38433b98e22741f399c50daace4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 16:41:18 addons-098000 cri-dockerd[1164]: time="2023-06-10T16:41:18Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961496642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961525017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961721889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:41:18 addons-098000 dockerd[939]: time="2023-06-10T16:41:18.961733472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713886345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713965761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713982010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.713993260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:44:06 addons-098000 dockerd[933]: time="2023-06-10T16:44:06.762500900Z" level=info msg="ignoring event" container=d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.762943477Z" level=info msg="shim disconnected" id=d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f namespace=moby
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.762974435Z" level=warning msg="cleaning up after shim disconnected" id=d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f namespace=moby
	Jun 10 16:44:06 addons-098000 dockerd[939]: time="2023-06-10T16:44:06.762978435Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	d8c42421d5531       1499ed4fbd0aa                                                                                                                                3 minutes ago       Exited              minikube-ingress-dns                     9                   8e5b404496c4e
	5db3f4d3cbb1e       nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305                                                                6 minutes ago       Running             task-pv-container                        0                   ecfebe56570ba
	efcb07dff66ed       ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be                                        6 minutes ago       Running             headlamp                                 0                   4b1c09a27c65e
	23a8cae6443cd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          23 minutes ago      Running             csi-snapshotter                          0                   567c041b8040d
	1a73024f59864       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 24 minutes ago      Running             gcp-auth                                 0                   d8f3043938a40
	3fa8701fda26c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          24 minutes ago      Running             csi-provisioner                          0                   567c041b8040d
	aafd1d61dfe4b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            24 minutes ago      Running             liveness-probe                           0                   567c041b8040d
	2b6767dfbe9d3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           24 minutes ago      Running             hostpath                                 0                   567c041b8040d
	8f02984364568       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                24 minutes ago      Running             node-driver-registrar                    0                   567c041b8040d
	868cfa9fcba69       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   24 minutes ago      Running             csi-external-health-monitor-controller   0                   567c041b8040d
	26cfafca2bb0d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              24 minutes ago      Running             csi-resizer                              0                   a78a427783820
	c58c2d26acda8       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             24 minutes ago      Running             csi-attacher                             0                   674b1cd12ae30
	46105da82f67a       ba04bb24b9575                                                                                                                                24 minutes ago      Running             storage-provisioner                      0                   67c7765a9fa6e
	de0a71571f8d0       29921a0845422                                                                                                                                24 minutes ago      Running             kube-proxy                               0                   2bc9129027615
	adfb52103967f       97e04611ad434                                                                                                                                24 minutes ago      Running             coredns                                  0                   d428f978de558
	335475d795fcf       305d7ed1dae28                                                                                                                                25 minutes ago      Running             kube-scheduler                           0                   31fdcf4abeef0
	3dcf946c301ce       2ee705380c3c5                                                                                                                                25 minutes ago      Running             kube-controller-manager                  0                   9fed8ca4bd2f8
	74423d2dab41d       72c9df6be7f1b                                                                                                                                25 minutes ago      Running             kube-apiserver                           0                   11d78b6999216
	2a81bf4413e12       24bc64e911039                                                                                                                                25 minutes ago      Running             etcd                                     0                   a20e51a803c8c
	
	* 
	* ==> coredns [adfb52103967] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46766 - 38334 "HINFO IN 1120296007274907072.5268654669647465865. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004199511s
	[INFO] 10.244.0.10:39208 - 36576 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125s
	[INFO] 10.244.0.10:59425 - 64759 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155334s
	[INFO] 10.244.0.10:33915 - 19077 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000037167s
	[INFO] 10.244.0.10:46994 - 65166 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002725s
	[INFO] 10.244.0.10:46598 - 37414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043625s
	[INFO] 10.244.0.10:55204 - 18019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000032792s
	[INFO] 10.244.0.10:60613 - 7185 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000939127s
	[INFO] 10.244.0.10:40293 - 55849 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00103996s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=addons-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_22_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-098000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-098000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-098000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:46:36 +0000   Sat, 10 Jun 2023 16:22:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 43359b33bc0f4b9c9610dd4ec5308f62
	  System UUID:                43359b33bc0f4b9c9610dd4ec5308f62
	  Boot ID:                    eb81fa5c-fe8f-47ab-b5e5-9f5fe2e987b0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  gcp-auth                    gcp-auth-58478865f7-jkcxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  headlamp                    headlamp-6b5756787-6wqrt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-5d78c9869d-f2tnn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     24m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 csi-hostpathplugin-pjvh6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-addons-098000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         24m
	  kube-system                 kube-apiserver-addons-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-addons-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-jpnqh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-addons-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24m   kube-proxy       
	  Normal  Starting                 24m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  24m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24m   kubelet          Node addons-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m   kubelet          Node addons-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m   kubelet          Node addons-098000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24m   kubelet          Node addons-098000 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node addons-098000 event: Registered Node addons-098000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.696014] EINJ: EINJ table not found.
	[  +0.658239] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043798] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000807] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.876165] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.071972] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.924516] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +2.288987] systemd-fstab-generator[866]: Ignoring "noauto" for root device
	[  +0.165983] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +0.077870] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.072149] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +1.146266] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.099605] systemd-fstab-generator[1083]: Ignoring "noauto" for root device
	[  +0.082038] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +0.080513] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.078963] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.086582] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	[  +3.056689] systemd-fstab-generator[1402]: Ignoring "noauto" for root device
	[  +4.651414] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[ +14.757696] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.157496] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.873848] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jun10 16:23] kauditd_printk_skb: 12 callbacks suppressed
	[Jun10 16:41] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [2a81bf4413e1] <==
	* {"level":"info","ts":"2023-06-10T16:22:22.849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-10T16:22:22.857Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-098000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:22:22.859Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:22:22.860Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:22:22.866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:32:22.450Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":974}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":974,"took":"2.490131ms","hash":4035340276}
	{"level":"info","ts":"2023-06-10T16:32:22.453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4035340276,"revision":974,"compact-revision":-1}
	{"level":"info","ts":"2023-06-10T16:37:22.461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1290}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1290,"took":"1.421443ms","hash":2326989487}
	{"level":"info","ts":"2023-06-10T16:37:22.463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2326989487,"revision":1290,"compact-revision":974}
	{"level":"info","ts":"2023-06-10T16:40:51.234Z","caller":"traceutil/trace.go:171","msg":"trace[700287871] transaction","detail":"{read_only:false; response_revision:1833; number_of_response:1; }","duration":"135.849795ms","start":"2023-06-10T16:40:51.098Z","end":"2023-06-10T16:40:51.234Z","steps":["trace[700287871] 'process raft request'  (duration: 135.764879ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T16:42:22.466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1593}
	{"level":"info","ts":"2023-06-10T16:42:22.468Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1593,"took":"1.224024ms","hash":3268633527}
	{"level":"info","ts":"2023-06-10T16:42:22.468Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3268633527,"revision":1593,"compact-revision":1290}
	{"level":"info","ts":"2023-06-10T16:47:22.473Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1959}
	{"level":"info","ts":"2023-06-10T16:47:22.475Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1959,"took":"1.126903ms","hash":690256152}
	{"level":"info","ts":"2023-06-10T16:47:22.475Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":690256152,"revision":1959,"compact-revision":1593}
	
	* 
	* ==> gcp-auth [1a73024f5986] <==
	* 2023/06/10 16:23:23 GCP Auth Webhook started!
	2023/06/10 16:40:46 Ready to marshal response ...
	2023/06/10 16:40:46 Ready to write response ...
	2023/06/10 16:40:46 Ready to marshal response ...
	2023/06/10 16:40:46 Ready to write response ...
	2023/06/10 16:40:46 Ready to marshal response ...
	2023/06/10 16:40:46 Ready to write response ...
	2023/06/10 16:41:14 Ready to marshal response ...
	2023/06/10 16:41:14 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  16:47:24 up 25 min,  0 users,  load average: 0.68, 0.51, 0.41
	Linux addons-098000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [74423d2dab41] <==
	* I0610 16:22:23.642356       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:22:23.657792       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:22:24.401560       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:22:24.563279       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:22:24.568497       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:22:24.568654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:22:24.720978       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:22:24.731371       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:22:24.801810       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:22:24.805350       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0610 16:22:24.806303       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:22:24.807740       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:22:25.583035       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:22:26.059225       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:22:26.063878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:22:26.068513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:22:39.217505       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 16:22:39.917252       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:22:40.754199       1 alloc.go:330] "allocated clusterIPs" service="default/cloud-spanner-emulator" clusterIPs=map[IPv4:10.99.222.169]
	I0610 16:22:41.357691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.106.85.14]
	I0610 16:22:41.362266       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0610 16:22:41.419673       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.111.90.60]
	I0610 16:22:46.394399       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.102.46.8]
	I0610 16:22:46.411449       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0610 16:40:46.897391       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.104.126.229]
	
	* 
	* ==> kube-controller-manager [3dcf946c301c] <==
	* I0610 16:23:11.258467       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.330708       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.333357       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.335850       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:11.335887       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.336870       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:11.345682       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.251101       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.256393       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.263577       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0610 16:23:12.263691       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.265671       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:12.266556       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:41.027747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:41.050836       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0610 16:23:42.013412       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:23:42.047992       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0610 16:40:46.909946       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-6b5756787 to 1"
	I0610 16:40:46.923328       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-6b5756787-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	E0610 16:40:46.939753       1 replica_set.go:544] sync "headlamp/headlamp-6b5756787" failed with pods "headlamp-6b5756787-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0610 16:40:46.962022       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-6b5756787-6wqrt"
	I0610 16:40:58.095565       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0610 16:40:58.095582       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0610 16:41:08.457597       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0610 16:41:13.738721       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	
	* 
	* ==> kube-proxy [de0a71571f8d] <==
	* I0610 16:22:40.477801       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0610 16:22:40.477968       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0610 16:22:40.477988       1 server_others.go:551] "Using iptables proxy"
	I0610 16:22:40.508315       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:22:40.508325       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:22:40.508357       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:22:40.508608       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:22:40.508614       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:22:40.509861       1 config.go:188] "Starting service config controller"
	I0610 16:22:40.509869       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:22:40.509881       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:22:40.509882       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:22:40.511342       1 config.go:315] "Starting node config controller"
	I0610 16:22:40.511347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:22:40.609918       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 16:22:40.609943       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:22:40.611397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [335475d795fc] <==
	* W0610 16:22:23.606482       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:22:23.606891       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:22:23.606959       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:22:23.606982       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:22:23.607008       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607026       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607067       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:23.607087       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:23.607166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:22:23.607199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 16:22:23.607247       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:23.607268       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.463642       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:22:24.463731       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:22:24.485768       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:22:24.485809       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:22:24.588161       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:22:24.588197       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 16:22:24.600064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:22:24.600158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:22:24.604631       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:22:24.604651       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:22:24.616055       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:22:24.616131       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:22:27.098734       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:22:07 UTC, ends at Sat 2023-06-10 16:47:25 UTC. --
	Jun 10 16:45:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:45:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:45:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:45:28 addons-098000 kubelet[2091]: I0610 16:45:28.681236    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:45:28 addons-098000 kubelet[2091]: E0610 16:45:28.681397    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:45:43 addons-098000 kubelet[2091]: I0610 16:45:43.681427    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:45:43 addons-098000 kubelet[2091]: E0610 16:45:43.681552    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:45:58 addons-098000 kubelet[2091]: I0610 16:45:58.682689    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:45:58 addons-098000 kubelet[2091]: E0610 16:45:58.683935    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:46:12 addons-098000 kubelet[2091]: I0610 16:46:12.680925    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:46:12 addons-098000 kubelet[2091]: E0610 16:46:12.681106    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:46:23 addons-098000 kubelet[2091]: I0610 16:46:23.681224    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:46:23 addons-098000 kubelet[2091]: E0610 16:46:23.681452    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:46:25 addons-098000 kubelet[2091]: E0610 16:46:25.685079    2091 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:46:25 addons-098000 kubelet[2091]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:46:25 addons-098000 kubelet[2091]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:46:25 addons-098000 kubelet[2091]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:46:35 addons-098000 kubelet[2091]: I0610 16:46:35.682416    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:46:35 addons-098000 kubelet[2091]: E0610 16:46:35.682626    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:46:46 addons-098000 kubelet[2091]: I0610 16:46:46.681624    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:46:46 addons-098000 kubelet[2091]: E0610 16:46:46.682260    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:46:59 addons-098000 kubelet[2091]: I0610 16:46:59.680475    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:46:59 addons-098000 kubelet[2091]: E0610 16:46:59.680906    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	Jun 10 16:47:14 addons-098000 kubelet[2091]: I0610 16:47:14.680334    2091 scope.go:115] "RemoveContainer" containerID="d8c42421d553165930dd6d04eff04f6ff5706b6ddad1a746d2592f864f717b0f"
	Jun 10 16:47:14 addons-098000 kubelet[2091]: E0610 16:47:14.680480    2091 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ef4b950f-9458-4bb3-8460-5c464e4ed538)\"" pod="kube-system/kube-ingress-dns-minikube" podUID=ef4b950f-9458-4bb3-8460-5c464e4ed538
	
	* 
	* ==> storage-provisioner [46105da82f67] <==
	* I0610 16:22:41.552997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:22:41.564566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:22:41.564604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:22:41.567070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:22:41.567242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b8b8b2f-e69f-4abd-8693-9c0a331852aa", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-098000_976d826c-217e-4d0d-87e7-e825dd783783 became leader
	I0610 16:22:41.567336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	I0610 16:22:41.668274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-098000_976d826c-217e-4d0d-87e7-e825dd783783!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-098000 -n addons-098000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (387.31s)

                                                
                                    
TestCertOptions (10.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-834000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-834000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.765202875s)

                                                
                                                
-- stdout --
	* [cert-options-834000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-834000 in cluster cert-options-834000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-834000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-834000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-834000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-834000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (79.750167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-834000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-834000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-834000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-834000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-834000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.10275ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-834000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-834000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-834000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-06-10 10:14:12.233258 -0700 PDT m=+3183.300836709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-834000 -n cert-options-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-834000 -n cert-options-834000: exit status 7 (29.440583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-834000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-834000
--- FAIL: TestCertOptions (10.04s)

                                                
                                    
TestCertExpiration (195.24s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-841000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-841000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.844314917s)

                                                
                                                
-- stdout --
	* [cert-expiration-841000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-841000 in cluster cert-expiration-841000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-841000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-841000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-841000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-841000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-841000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.225159416s)

                                                
                                                
-- stdout --
	* [cert-expiration-841000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-841000 in cluster cert-expiration-841000
	* Restarting existing qemu2 VM for "cert-expiration-841000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-841000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-841000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-841000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-841000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-841000 in cluster cert-expiration-841000
	* Restarting existing qemu2 VM for "cert-expiration-841000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-841000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-841000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-06-10 10:17:12.405317 -0700 PDT m=+3363.477798084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-841000 -n cert-expiration-841000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-841000 -n cert-expiration-841000: exit status 7 (69.751458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-841000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-841000
--- FAIL: TestCertExpiration (195.24s)
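
Editor's note: every start attempt above fails at the same point — socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM is never created or restarted. Below is a minimal pre-flight probe for that socket, as a sketch only: it assumes the daemon exposes a stream-type unix socket at the default path shown in the log, which is inferred from the error text rather than verified against the socket_vmnet sources.

	// probe_socket_vmnet.go - quick check that the socket_vmnet daemon is reachable.
	// Assumption: the daemon listens on a stream-type unix socket at the path below,
	// as suggested by the repeated "Failed to connect to /var/run/socket_vmnet" errors.
	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the same condition the tests hit: connection refused / no daemon.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}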

                                                
                                    
TestDockerFlags (10.19s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-821000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-821000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.943524583s)

                                                
                                                
-- stdout --
	* [docker-flags-821000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-821000 in cluster docker-flags-821000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-821000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:13:52.147277    4237 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:13:52.147440    4237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:52.147443    4237 out.go:309] Setting ErrFile to fd 2...
	I0610 10:13:52.147446    4237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:52.147512    4237 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:13:52.148578    4237 out.go:303] Setting JSON to false
	I0610 10:13:52.163766    4237 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4403,"bootTime":1686412829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:13:52.163827    4237 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:13:52.167870    4237 out.go:177] * [docker-flags-821000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:13:52.176040    4237 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:13:52.179945    4237 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:13:52.176056    4237 notify.go:220] Checking for updates...
	I0610 10:13:52.185975    4237 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:13:52.188896    4237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:13:52.191967    4237 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:13:52.195002    4237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:13:52.198234    4237 config.go:182] Loaded profile config "force-systemd-flag-177000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:13:52.198288    4237 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:13:52.202930    4237 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:13:52.209955    4237 start.go:297] selected driver: qemu2
	I0610 10:13:52.209960    4237 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:13:52.209970    4237 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:13:52.211940    4237 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:13:52.214978    4237 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:13:52.218044    4237 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0610 10:13:52.218065    4237 cni.go:84] Creating CNI manager for ""
	I0610 10:13:52.218078    4237 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:13:52.218083    4237 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:13:52.218089    4237 start_flags.go:319] config:
	{Name:docker-flags-821000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-821000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP:}
	I0610 10:13:52.218180    4237 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:13:52.225969    4237 out.go:177] * Starting control plane node docker-flags-821000 in cluster docker-flags-821000
	I0610 10:13:52.229967    4237 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:13:52.229989    4237 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:13:52.230000    4237 cache.go:57] Caching tarball of preloaded images
	I0610 10:13:52.230062    4237 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:13:52.230080    4237 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:13:52.230164    4237 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/docker-flags-821000/config.json ...
	I0610 10:13:52.230175    4237 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/docker-flags-821000/config.json: {Name:mk05c545952eaa96edb245206aef05b5f2cca31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:13:52.230366    4237 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:13:52.230377    4237 start.go:364] acquiring machines lock for docker-flags-821000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:52.230407    4237 start.go:368] acquired machines lock for "docker-flags-821000" in 24.583µs
	I0610 10:13:52.230419    4237 start.go:93] Provisioning new machine with config: &{Name:docker-flags-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-821000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:52.230447    4237 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:52.234965    4237 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:52.251315    4237 start.go:159] libmachine.API.Create for "docker-flags-821000" (driver="qemu2")
	I0610 10:13:52.251341    4237 client.go:168] LocalClient.Create starting
	I0610 10:13:52.251402    4237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:52.251420    4237 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:52.251433    4237 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:52.251482    4237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:52.251501    4237 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:52.251512    4237 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:52.251870    4237 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:52.402804    4237 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:52.610041    4237 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:52.610051    4237 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:52.610230    4237 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2
	I0610 10:13:52.619510    4237 main.go:141] libmachine: STDOUT: 
	I0610 10:13:52.619527    4237 main.go:141] libmachine: STDERR: 
	I0610 10:13:52.619597    4237 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2 +20000M
	I0610 10:13:52.626793    4237 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:52.626815    4237 main.go:141] libmachine: STDERR: 
	I0610 10:13:52.626831    4237 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2
	I0610 10:13:52.626847    4237 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:52.626881    4237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d8:8f:a3:9a:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2
	I0610 10:13:52.628454    4237 main.go:141] libmachine: STDOUT: 
	I0610 10:13:52.628465    4237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:52.628482    4237 client.go:171] LocalClient.Create took 377.141584ms
	I0610 10:13:54.630657    4237 start.go:128] duration metric: createHost completed in 2.400211875s
	I0610 10:13:54.630744    4237 start.go:83] releasing machines lock for "docker-flags-821000", held for 2.400362083s
	W0610 10:13:54.630812    4237 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:54.640932    4237 out.go:177] * Deleting "docker-flags-821000" in qemu2 ...
	W0610 10:13:54.656860    4237 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:54.656889    4237 start.go:702] Will try again in 5 seconds ...
	I0610 10:13:59.658972    4237 start.go:364] acquiring machines lock for docker-flags-821000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:59.733418    4237 start.go:368] acquired machines lock for "docker-flags-821000" in 74.308916ms
	I0610 10:13:59.733604    4237 start.go:93] Provisioning new machine with config: &{Name:docker-flags-821000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-821000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:59.733995    4237 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:59.739351    4237 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:59.785467    4237 start.go:159] libmachine.API.Create for "docker-flags-821000" (driver="qemu2")
	I0610 10:13:59.785572    4237 client.go:168] LocalClient.Create starting
	I0610 10:13:59.785690    4237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:59.785732    4237 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:59.785749    4237 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:59.785822    4237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:59.785849    4237 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:59.785864    4237 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:59.786380    4237 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:59.928141    4237 main.go:141] libmachine: Creating SSH key...
	I0610 10:14:00.003443    4237 main.go:141] libmachine: Creating Disk image...
	I0610 10:14:00.003450    4237 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:14:00.003584    4237 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2
	I0610 10:14:00.012049    4237 main.go:141] libmachine: STDOUT: 
	I0610 10:14:00.012061    4237 main.go:141] libmachine: STDERR: 
	I0610 10:14:00.012103    4237 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2 +20000M
	I0610 10:14:00.019159    4237 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:14:00.019172    4237 main.go:141] libmachine: STDERR: 
	I0610 10:14:00.019183    4237 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2
	I0610 10:14:00.019190    4237 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:14:00.019234    4237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:72:f9:ff:66:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/docker-flags-821000/disk.qcow2
	I0610 10:14:00.020760    4237 main.go:141] libmachine: STDOUT: 
	I0610 10:14:00.020772    4237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:00.020790    4237 client.go:171] LocalClient.Create took 235.216083ms
	I0610 10:14:02.022916    4237 start.go:128] duration metric: createHost completed in 2.288930833s
	I0610 10:14:02.023038    4237 start.go:83] releasing machines lock for "docker-flags-821000", held for 2.289560792s
	W0610 10:14:02.023444    4237 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:02.034021    4237 out.go:177] 
	W0610 10:14:02.038397    4237 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:14:02.038421    4237 out.go:239] * 
	* 
	W0610 10:14:02.041045    4237 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:14:02.050012    4237 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-821000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:50: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-821000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-821000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (79.267ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-821000"

                                                
                                                
-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-821000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-821000\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-821000\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-821000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-821000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (42.468291ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-821000"

                                                
                                                
-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-821000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:67: expected "out/minikube-darwin-arm64 -p docker-flags-821000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-821000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-06-10 10:14:02.188725 -0700 PDT m=+3173.255583418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-821000 -n docker-flags-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-821000 -n docker-flags-821000: exit status 7 (28.872166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-821000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-821000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-821000
--- FAIL: TestDockerFlags (10.19s)
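
Editor's note: because the cluster never started, the assertions reported at docker_test.go:57 and docker_test.go:67 ran against the "control plane node must be running" hint instead of real systemctl output, so every expected value (FOO=BAR, BAZ=BAT, --debug) is reported as missing. A hedged illustration of that kind of containment check follows; the helper name and exact matching logic are hypothetical and not copied from docker_test.go.

	// Hypothetical illustration of the containment checks the log reports at
	// docker_test.go:57/67; the real test's helpers and logic may differ.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// containsAll returns the wanted substrings that are absent from output.
	func containsAll(output string, wanted ...string) []string {
		var missing []string
		for _, w := range wanted {
			if !strings.Contains(output, w) {
				missing = append(missing, w)
			}
		}
		return missing
	}
	
	func main() {
		// In the failed run, the ssh command only returned this hint, so every
		// expected key/value is reported as missing.
		output := "* The control plane node must be running for this command"
		if missing := containsAll(output, "FOO=BAR", "BAZ=BAT", "--debug"); len(missing) > 0 {
			fmt.Printf("missing from docker unit output: %v\n", missing)
		}
	}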

                                                
                                    
TestForceSystemdFlag (11.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-177000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-177000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.935860542s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-177000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-177000 in cluster force-systemd-flag-177000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-177000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:13:46.213518    4216 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:13:46.213660    4216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:46.213664    4216 out.go:309] Setting ErrFile to fd 2...
	I0610 10:13:46.213666    4216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:46.213744    4216 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:13:46.214887    4216 out.go:303] Setting JSON to false
	I0610 10:13:46.230520    4216 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4397,"bootTime":1686412829,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:13:46.230590    4216 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:13:46.238830    4216 out.go:177] * [force-systemd-flag-177000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:13:46.241903    4216 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:13:46.246834    4216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:13:46.241951    4216 notify.go:220] Checking for updates...
	I0610 10:13:46.252908    4216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:13:46.255783    4216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:13:46.258837    4216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:13:46.261796    4216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:13:46.265012    4216 config.go:182] Loaded profile config "force-systemd-env-535000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:13:46.265063    4216 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:13:46.268783    4216 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:13:46.275780    4216 start.go:297] selected driver: qemu2
	I0610 10:13:46.275785    4216 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:13:46.275794    4216 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:13:46.277751    4216 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:13:46.280754    4216 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:13:46.283915    4216 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:13:46.283933    4216 cni.go:84] Creating CNI manager for ""
	I0610 10:13:46.283941    4216 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:13:46.283945    4216 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:13:46.283956    4216 start_flags.go:319] config:
	{Name:force-systemd-flag-177000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-177000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:13:46.284047    4216 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:13:46.291802    4216 out.go:177] * Starting control plane node force-systemd-flag-177000 in cluster force-systemd-flag-177000
	I0610 10:13:46.295671    4216 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:13:46.295693    4216 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:13:46.295706    4216 cache.go:57] Caching tarball of preloaded images
	I0610 10:13:46.295765    4216 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:13:46.295771    4216 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:13:46.295837    4216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/force-systemd-flag-177000/config.json ...
	I0610 10:13:46.295850    4216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/force-systemd-flag-177000/config.json: {Name:mka49e88d198adb077cc5223aeb4871ea300f5ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:13:46.296045    4216 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:13:46.296057    4216 start.go:364] acquiring machines lock for force-systemd-flag-177000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:46.296087    4216 start.go:368] acquired machines lock for "force-systemd-flag-177000" in 24.334µs
	I0610 10:13:46.296099    4216 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-177000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-177000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:46.296129    4216 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:46.304641    4216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:46.321622    4216 start.go:159] libmachine.API.Create for "force-systemd-flag-177000" (driver="qemu2")
	I0610 10:13:46.321653    4216 client.go:168] LocalClient.Create starting
	I0610 10:13:46.321707    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:46.321732    4216 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:46.321741    4216 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:46.321785    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:46.321800    4216 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:46.321806    4216 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:46.322130    4216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:46.498221    4216 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:46.544207    4216 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:46.544213    4216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:46.544358    4216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2
	I0610 10:13:46.552816    4216 main.go:141] libmachine: STDOUT: 
	I0610 10:13:46.552845    4216 main.go:141] libmachine: STDERR: 
	I0610 10:13:46.552892    4216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2 +20000M
	I0610 10:13:46.559918    4216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:46.559945    4216 main.go:141] libmachine: STDERR: 
	I0610 10:13:46.559964    4216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2
	I0610 10:13:46.559975    4216 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:46.560010    4216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a5:8e:a9:89:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2
	I0610 10:13:46.561517    4216 main.go:141] libmachine: STDOUT: 
	I0610 10:13:46.561533    4216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:46.561552    4216 client.go:171] LocalClient.Create took 239.896584ms
	I0610 10:13:48.563711    4216 start.go:128] duration metric: createHost completed in 2.267587667s
	I0610 10:13:48.563771    4216 start.go:83] releasing machines lock for "force-systemd-flag-177000", held for 2.267708666s
	W0610 10:13:48.563821    4216 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:48.576700    4216 out.go:177] * Deleting "force-systemd-flag-177000" in qemu2 ...
	W0610 10:13:48.595926    4216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:48.595955    4216 start.go:702] Will try again in 5 seconds ...
	I0610 10:13:53.598190    4216 start.go:364] acquiring machines lock for force-systemd-flag-177000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:54.630884    4216 start.go:368] acquired machines lock for "force-systemd-flag-177000" in 1.032600708s
	I0610 10:13:54.631050    4216 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-177000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-177000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:54.631364    4216 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:54.637051    4216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:54.682902    4216 start.go:159] libmachine.API.Create for "force-systemd-flag-177000" (driver="qemu2")
	I0610 10:13:54.682946    4216 client.go:168] LocalClient.Create starting
	I0610 10:13:54.683080    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:54.683120    4216 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:54.683146    4216 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:54.683225    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:54.683252    4216 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:54.683266    4216 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:54.683861    4216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:54.863550    4216 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:55.063533    4216 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:55.063541    4216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:55.063722    4216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2
	I0610 10:13:55.072852    4216 main.go:141] libmachine: STDOUT: 
	I0610 10:13:55.072870    4216 main.go:141] libmachine: STDERR: 
	I0610 10:13:55.072936    4216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2 +20000M
	I0610 10:13:55.080145    4216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:55.080158    4216 main.go:141] libmachine: STDERR: 
	I0610 10:13:55.080171    4216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2
	I0610 10:13:55.080178    4216 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:55.080210    4216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fa:f2:01:e0:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-flag-177000/disk.qcow2
	I0610 10:13:55.081763    4216 main.go:141] libmachine: STDOUT: 
	I0610 10:13:55.081784    4216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:55.081795    4216 client.go:171] LocalClient.Create took 398.848417ms
	I0610 10:13:57.083942    4216 start.go:128] duration metric: createHost completed in 2.452587708s
	I0610 10:13:57.084134    4216 start.go:83] releasing machines lock for "force-systemd-flag-177000", held for 2.453192209s
	W0610 10:13:57.084535    4216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:57.094146    4216 out.go:177] 
	W0610 10:13:57.099000    4216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:13:57.099057    4216 out.go:239] * 
	* 
	W0610 10:13:57.102062    4216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:13:57.109957    4216 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-177000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-177000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-177000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (80.730625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-177000"

                                                
                                                
-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-177000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2023-06-10 10:13:57.206114 -0700 PDT m=+3168.272897751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-177000 -n force-systemd-flag-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-177000 -n force-systemd-flag-177000: exit status 7 (34.137125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-177000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-177000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-177000
--- FAIL: TestForceSystemdFlag (11.15s)
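	Editor's note: this failure and the TestForceSystemdEnv failure below reduce to the same host-side symptom, namely that socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"). The following is a minimal, illustrative Go sketch for reproducing that symptom outside the test harness; the probe, its timeout, and the diagnosis printed are assumptions added here for triage and are not part of minikube or of docker_test.go.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Hypothetical pre-flight check: dial the same unix socket that
		// socket_vmnet_client is handed in the failing command line above.
		// "connection refused" (or "no such file or directory") here usually
		// means the socket_vmnet daemon is not running on the host, which is
		// a different problem from anything inside QEMU or libmachine.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

	Run on the CI host before (or after) a failed run, this distinguishes a missing or stopped socket_vmnet service from a genuine driver regression.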

                                                
                                    
TestForceSystemdEnv (10.04s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-535000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-535000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.833268333s)

                                                
                                                
-- stdout --
	* [force-systemd-env-535000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-535000 in cluster force-systemd-env-535000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:13:42.106899    4179 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:13:42.107027    4179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:42.107030    4179 out.go:309] Setting ErrFile to fd 2...
	I0610 10:13:42.107032    4179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:13:42.107107    4179 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:13:42.108213    4179 out.go:303] Setting JSON to false
	I0610 10:13:42.123685    4179 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4393,"bootTime":1686412829,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:13:42.123752    4179 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:13:42.128045    4179 out.go:177] * [force-systemd-env-535000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:13:42.138978    4179 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:13:42.134977    4179 notify.go:220] Checking for updates...
	I0610 10:13:42.146987    4179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:13:42.157058    4179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:13:42.164979    4179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:13:42.171972    4179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:13:42.178954    4179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0610 10:13:42.183114    4179 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:13:42.187016    4179 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:13:42.193987    4179 start.go:297] selected driver: qemu2
	I0610 10:13:42.193991    4179 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:13:42.193998    4179 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:13:42.195905    4179 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:13:42.200048    4179 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:13:42.204032    4179 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:13:42.204046    4179 cni.go:84] Creating CNI manager for ""
	I0610 10:13:42.204054    4179 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:13:42.204061    4179 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:13:42.204065    4179 start_flags.go:319] config:
	{Name:force-systemd-env-535000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-535000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:13:42.204151    4179 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:13:42.209960    4179 out.go:177] * Starting control plane node force-systemd-env-535000 in cluster force-systemd-env-535000
	I0610 10:13:42.213981    4179 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:13:42.214007    4179 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:13:42.214024    4179 cache.go:57] Caching tarball of preloaded images
	I0610 10:13:42.214070    4179 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:13:42.214074    4179 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:13:42.214265    4179 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/force-systemd-env-535000/config.json ...
	I0610 10:13:42.214281    4179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/force-systemd-env-535000/config.json: {Name:mkc24356aba140250896e7a80387442c10fbbe90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:13:42.214444    4179 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:13:42.214455    4179 start.go:364] acquiring machines lock for force-systemd-env-535000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:42.214481    4179 start.go:368] acquired machines lock for "force-systemd-env-535000" in 21.708µs
	I0610 10:13:42.214493    4179 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-535000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:42.214530    4179 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:42.222954    4179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:42.238029    4179 start.go:159] libmachine.API.Create for "force-systemd-env-535000" (driver="qemu2")
	I0610 10:13:42.238066    4179 client.go:168] LocalClient.Create starting
	I0610 10:13:42.238140    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:42.238166    4179 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:42.238175    4179 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:42.238222    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:42.238237    4179 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:42.238246    4179 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:42.238522    4179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:42.403239    4179 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:42.466913    4179 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:42.466929    4179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:42.467113    4179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2
	I0610 10:13:42.476460    4179 main.go:141] libmachine: STDOUT: 
	I0610 10:13:42.476479    4179 main.go:141] libmachine: STDERR: 
	I0610 10:13:42.476544    4179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2 +20000M
	I0610 10:13:42.484577    4179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:42.484592    4179 main.go:141] libmachine: STDERR: 
	I0610 10:13:42.484617    4179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2
	I0610 10:13:42.484623    4179 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:42.484655    4179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:0e:22:56:e2:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2
	I0610 10:13:42.486422    4179 main.go:141] libmachine: STDOUT: 
	I0610 10:13:42.486437    4179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:42.486452    4179 client.go:171] LocalClient.Create took 248.385542ms
	I0610 10:13:44.488606    4179 start.go:128] duration metric: createHost completed in 2.274081583s
	I0610 10:13:44.488680    4179 start.go:83] releasing machines lock for "force-systemd-env-535000", held for 2.274223125s
	W0610 10:13:44.488765    4179 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:44.495933    4179 out.go:177] * Deleting "force-systemd-env-535000" in qemu2 ...
	W0610 10:13:44.513925    4179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:44.513952    4179 start.go:702] Will try again in 5 seconds ...
	I0610 10:13:49.516126    4179 start.go:364] acquiring machines lock for force-systemd-env-535000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:49.516596    4179 start.go:368] acquired machines lock for "force-systemd-env-535000" in 332.875µs
	I0610 10:13:49.516727    4179 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-535000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-535000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:49.516998    4179 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:49.525594    4179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 10:13:49.571815    4179 start.go:159] libmachine.API.Create for "force-systemd-env-535000" (driver="qemu2")
	I0610 10:13:49.571856    4179 client.go:168] LocalClient.Create starting
	I0610 10:13:49.571994    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:49.572045    4179 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:49.572072    4179 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:49.572164    4179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:49.572194    4179 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:49.572206    4179 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:49.572873    4179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:49.739192    4179 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:49.854659    4179 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:49.854665    4179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:49.854823    4179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2
	I0610 10:13:49.863218    4179 main.go:141] libmachine: STDOUT: 
	I0610 10:13:49.863232    4179 main.go:141] libmachine: STDERR: 
	I0610 10:13:49.863292    4179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2 +20000M
	I0610 10:13:49.870388    4179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:49.870416    4179 main.go:141] libmachine: STDERR: 
	I0610 10:13:49.870431    4179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2
	I0610 10:13:49.870435    4179 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:49.870469    4179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d6:a4:c7:1f:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/force-systemd-env-535000/disk.qcow2
	I0610 10:13:49.871974    4179 main.go:141] libmachine: STDOUT: 
	I0610 10:13:49.871988    4179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:49.872002    4179 client.go:171] LocalClient.Create took 300.146791ms
	I0610 10:13:51.874197    4179 start.go:128] duration metric: createHost completed in 2.357206833s
	I0610 10:13:51.874255    4179 start.go:83] releasing machines lock for "force-systemd-env-535000", held for 2.357670625s
	W0610 10:13:51.874695    4179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:51.882227    4179 out.go:177] 
	W0610 10:13:51.887283    4179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:13:51.887305    4179 out.go:239] * 
	* 
	W0610 10:13:51.889895    4179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:13:51.899147    4179 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-535000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-535000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-535000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (76.787458ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-535000"

                                                
                                                
-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-535000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2023-06-10 10:13:51.991946 -0700 PDT m=+3163.058652876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-535000 -n force-systemd-env-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-535000 -n force-systemd-env-535000: exit status 7 (32.555167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-535000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-535000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-535000
--- FAIL: TestForceSystemdEnv (10.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (33.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-656000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-656000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-xgv29" [7120a803-a8d7-4a3d-8acf-960033fcee5d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-xgv29" [7120a803-a8d7-4a3d-8acf-960033fcee5d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.013385083s
functional_test.go:1647: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.105.4:31569
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1679: failed to fetch http://192.168.105.4:31569: Get "http://192.168.105.4:31569": dial tcp 192.168.105.4:31569: connect: connection refused
functional_test.go:1596: service test failed - dumping debug information
functional_test.go:1597: -----------------------service failure post-mortem--------------------------------
functional_test.go:1600: (dbg) Run:  kubectl --context functional-656000 describe po hello-node-connect
functional_test.go:1604: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-xgv29
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-656000/192.168.105.4
Start Time:       Sat, 10 Jun 2023 09:52:05 -0700
Labels:           app=hello-node-connect
pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
echoserver-arm:
Container ID:   docker://f8e0dcec707c19047f5e79ad9033f5666208c4d9a13c134bbfe45b2b0323d4c7
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Sat, 10 Jun 2023 09:52:29 -0700
Finished:     Sat, 10 Jun 2023 09:52:29 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Sat, 10 Jun 2023 09:52:11 -0700
Finished:     Sat, 10 Jun 2023 09:52:11 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mpzlk (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-mpzlk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  32s               default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-xgv29 to functional-656000
Normal   Pulling    31s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     27s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.526805714s (4.526814339s including waiting)
Normal   Created    8s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    8s (x3 over 27s)  kubelet            Started container echoserver-arm
Normal   Pulled     8s (x2 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    7s (x3 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-xgv29_default(7120a803-a8d7-4a3d-8acf-960033fcee5d)

                                                
                                                
functional_test.go:1606: (dbg) Run:  kubectl --context functional-656000 logs -l app=hello-node-connect
functional_test.go:1610: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1612: (dbg) Run:  kubectl --context functional-656000 describe svc hello-node-connect
functional_test.go:1616: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.196.254
IPs:                      10.99.196.254
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31569/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
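	Editor's note: the empty "Endpoints:" line in the service description above is consistent with the connection-refused errors earlier in this test: the NodePort has no ready pod to forward to because the echoserver-arm container keeps crashing ("exec /usr/sbin/nginx: exec format error"). The sketch below shows one way to confirm that programmatically with client-go; the kubeconfig path, namespace, and service name are taken from this report or assumed, and the check is illustrative rather than part of functional_test.go.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed: the default kubeconfig location; adjust for a minikube profile.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "hello-node-connect", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready, notReady := 0, 0
		for _, s := range ep.Subsets {
			ready += len(s.Addresses)
			notReady += len(s.NotReadyAddresses)
		}
		// ready == 0 corresponds to the blank "Endpoints:" line above: the
		// NodePort service has nothing behind it, so dials are refused.
		fmt.Printf("ready endpoints: %d, not ready: %d\n", ready, notReady)
	}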
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-656000 -n functional-656000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port423916164/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh -- ls                                                                                          | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh cat                                                                                            | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | /mount-9p/test-1686415946500997000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh stat                                                                                           | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh stat                                                                                           | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh sudo                                                                                           | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3674542565/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh -- ls                                                                                          | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh sudo                                                                                           | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-656000 ssh findmnt                                                                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-656000                                                                                                 | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-656000 --dry-run                                                                                       | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|           | -p functional-656000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:52:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:52:34.800683    2993 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:52:34.800817    2993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:34.800820    2993 out.go:309] Setting ErrFile to fd 2...
	I0610 09:52:34.800822    2993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:34.800894    2993 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:52:34.801878    2993 out.go:303] Setting JSON to false
	I0610 09:52:34.817409    2993 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3125,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:52:34.817472    2993 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:52:34.820309    2993 out.go:177] * [functional-656000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:52:34.827293    2993 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:52:34.830255    2993 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:52:34.827336    2993 notify.go:220] Checking for updates...
	I0610 09:52:34.836281    2993 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:52:34.837623    2993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:52:34.840315    2993 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:52:34.843297    2993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:52:34.853964    2993 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:52:34.854197    2993 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:52:34.858254    2993 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 09:52:34.865299    2993 start.go:297] selected driver: qemu2
	I0610 09:52:34.865304    2993 start.go:875] validating driver "qemu2" against &{Name:functional-656000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-656000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:52:34.865376    2993 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:52:34.867214    2993 cni.go:84] Creating CNI manager for ""
	I0610 09:52:34.867229    2993 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:52:34.867236    2993 start_flags.go:319] config:
	{Name:functional-656000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-656000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:52:34.874278    2993 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:49:56 UTC, ends at Sat 2023-06-10 16:52:38 UTC. --
	Jun 10 16:52:29 functional-656000 dockerd[6711]: time="2023-06-10T16:52:29.872148035Z" level=warning msg="cleaning up after shim disconnected" id=192b500229dc49d74c9c93b9e3a0d79f60e08c1db07c3370f6b99c17e4b5580f namespace=moby
	Jun 10 16:52:29 functional-656000 dockerd[6711]: time="2023-06-10T16:52:29.872164576Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:52:31 functional-656000 dockerd[6705]: time="2023-06-10T16:52:31.161100035Z" level=info msg="ignoring event" container=25aad7d4957cbff5520e191f64ca3131d73de89cca82d3b7d869164056e91239 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:52:31 functional-656000 dockerd[6711]: time="2023-06-10T16:52:31.161467861Z" level=info msg="shim disconnected" id=25aad7d4957cbff5520e191f64ca3131d73de89cca82d3b7d869164056e91239 namespace=moby
	Jun 10 16:52:31 functional-656000 dockerd[6711]: time="2023-06-10T16:52:31.161624066Z" level=warning msg="cleaning up after shim disconnected" id=25aad7d4957cbff5520e191f64ca3131d73de89cca82d3b7d869164056e91239 namespace=moby
	Jun 10 16:52:31 functional-656000 dockerd[6711]: time="2023-06-10T16:52:31.161639816Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.249164908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.249378821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.249411904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.249434278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:52:35 functional-656000 dockerd[6705]: time="2023-06-10T16:52:35.311954812Z" level=info msg="ignoring event" container=daa53364c98985622e0fd60f414d04e510a932b6562e4f267ca35cfb9c410a56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.312167017Z" level=info msg="shim disconnected" id=daa53364c98985622e0fd60f414d04e510a932b6562e4f267ca35cfb9c410a56 namespace=moby
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.312194975Z" level=warning msg="cleaning up after shim disconnected" id=daa53364c98985622e0fd60f414d04e510a932b6562e4f267ca35cfb9c410a56 namespace=moby
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.312198558Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.821723922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.821757464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.821766672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.821772963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.822015209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.822033001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.822042709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:52:35 functional-656000 dockerd[6711]: time="2023-06-10T16:52:35.822046959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:52:35 functional-656000 cri-dockerd[6993]: time="2023-06-10T16:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/79510ce7cf5dde240ecca85b4dc5e28b9569c74e9bf2941a8ba980a3ea718926/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 16:52:35 functional-656000 cri-dockerd[6993]: time="2023-06-10T16:52:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b1aa3f8d38c7362c3ae74e34c323a3f8064d0685ce838ccb8bc3453573275b8/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 16:52:36 functional-656000 dockerd[6705]: time="2023-06-10T16:52:36.189654892Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	daa53364c9898       72565bf5bbedf                                                                                         3 seconds ago        Exited              echoserver-arm            2                   c31fdeeec3dea
	192b500229dc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   25aad7d4957cb
	f8e0dcec707c1       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            2                   c967d8f92d46f
	64cba4b300053       nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305                         25 seconds ago       Running             myfrontend                0                   3e90fc5a4ba99
	9f2354a319edc       nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90                         40 seconds ago       Running             nginx                     0                   9116fba1fff69
	1785432d7e78d       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   14dd3efce7ba5
	b152293839c26       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   50720bed380d6
	afc09444f024a       29921a0845422                                                                                         About a minute ago   Running             kube-proxy                2                   e7bd867a9a0cd
	1edb0328d3fa2       2ee705380c3c5                                                                                         About a minute ago   Running             kube-controller-manager   2                   36ae50d74d259
	b5599e7dd85a2       72c9df6be7f1b                                                                                         About a minute ago   Running             kube-apiserver            0                   00e5b44229515
	9fae0abdb39a2       305d7ed1dae28                                                                                         About a minute ago   Running             kube-scheduler            2                   02a49a9bf589d
	8daf61a06ede8       24bc64e911039                                                                                         About a minute ago   Running             etcd                      2                   da9037187476e
	cde9722371b2d       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   a77de1f48d879
	37477d07a5187       29921a0845422                                                                                         About a minute ago   Exited              kube-proxy                1                   d768190af9687
	f0915e56817f3       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   ce2e00e279407
	85408977d450f       2ee705380c3c5                                                                                         About a minute ago   Exited              kube-controller-manager   1                   b168cda87673e
	57a7cc9a5af77       24bc64e911039                                                                                         About a minute ago   Exited              etcd                      1                   db9986f47500d
	25267469a4405       305d7ed1dae28                                                                                         About a minute ago   Exited              kube-scheduler            1                   b7e31ef4fbabe
	
	* 
	* ==> coredns [b152293839c2] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56321 - 43922 "HINFO IN 4030653374030815143.5405088511711524711. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004829743s
	[INFO] 10.244.0.1:29006 - 9751 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000101702s
	[INFO] 10.244.0.1:13260 - 60540 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000068662s
	[INFO] 10.244.0.1:22802 - 59743 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00006387s
	[INFO] 10.244.0.1:23271 - 35139 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001671219s
	[INFO] 10.244.0.1:53240 - 51345 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000117367s
	[INFO] 10.244.0.1:15668 - 39307 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000214361s
	
	* 
	* ==> coredns [cde9722371b2] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45213 - 40886 "HINFO IN 800962726749955243.5337616989434516966. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004146238s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-656000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-656000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=functional-656000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_50_14_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:50:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-656000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:52:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:52:34 +0000   Sat, 10 Jun 2023 16:50:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:52:34 +0000   Sat, 10 Jun 2023 16:50:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:52:34 +0000   Sat, 10 Jun 2023 16:50:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:52:34 +0000   Sat, 10 Jun 2023 16:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-656000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5416c05f8f24557a1f570b4cfece9ad
	  System UUID:                c5416c05f8f24557a1f570b4cfece9ad
	  Boot ID:                    ac4f2aa0-b3c2-4357-9cbf-1e64bb963bec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-kwc7h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     hello-node-connect-58d66798bb-xgv29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 coredns-5d78c9869d-8gnmt                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m11s
	  kube-system                 etcd-functional-656000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m24s
	  kube-system                 kube-apiserver-functional-656000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-functional-656000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-bvr5g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-functional-656000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kubernetes-dashboard        dashboard-metrics-scraper-5dd9cbfd69-lmfrr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-dl8w6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m10s              kube-proxy       
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 2m24s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m24s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m24s              kubelet          Node functional-656000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s              kubelet          Node functional-656000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s              kubelet          Node functional-656000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m22s              kubelet          Node functional-656000 status is now: NodeReady
	  Normal  RegisteredNode           2m12s              node-controller  Node functional-656000 event: Registered Node functional-656000 in Controller
	  Normal  RegisteredNode           97s                node-controller  Node functional-656000 event: Registered Node functional-656000 in Controller
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node functional-656000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node functional-656000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node functional-656000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           54s                node-controller  Node functional-656000 event: Registered Node functional-656000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.139401] systemd-fstab-generator[3707]: Ignoring "noauto" for root device
	[  +0.117678] systemd-fstab-generator[3718]: Ignoring "noauto" for root device
	[  +0.122555] systemd-fstab-generator[3731]: Ignoring "noauto" for root device
	[  +1.471673] kauditd_printk_skb: 30 callbacks suppressed
	[ +10.053103] systemd-fstab-generator[4363]: Ignoring "noauto" for root device
	[  +0.071183] systemd-fstab-generator[4374]: Ignoring "noauto" for root device
	[  +0.095819] systemd-fstab-generator[4385]: Ignoring "noauto" for root device
	[  +0.090327] systemd-fstab-generator[4397]: Ignoring "noauto" for root device
	[  +0.099222] systemd-fstab-generator[4472]: Ignoring "noauto" for root device
	[  +5.250300] kauditd_printk_skb: 34 callbacks suppressed
	[Jun10 16:51] systemd-fstab-generator[6251]: Ignoring "noauto" for root device
	[  +0.139557] systemd-fstab-generator[6285]: Ignoring "noauto" for root device
	[  +0.089699] systemd-fstab-generator[6296]: Ignoring "noauto" for root device
	[  +0.104464] systemd-fstab-generator[6309]: Ignoring "noauto" for root device
	[ +11.423775] systemd-fstab-generator[6873]: Ignoring "noauto" for root device
	[  +0.084904] systemd-fstab-generator[6884]: Ignoring "noauto" for root device
	[  +0.080370] systemd-fstab-generator[6895]: Ignoring "noauto" for root device
	[  +0.072063] systemd-fstab-generator[6916]: Ignoring "noauto" for root device
	[  +0.085889] systemd-fstab-generator[6986]: Ignoring "noauto" for root device
	[  +0.995107] systemd-fstab-generator[7240]: Ignoring "noauto" for root device
	[  +3.609354] kauditd_printk_skb: 29 callbacks suppressed
	[ +26.471895] kauditd_printk_skb: 1 callbacks suppressed
	[Jun10 16:52] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +11.777937] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.581910] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [57a7cc9a5af7] <==
	* {"level":"info","ts":"2023-06-10T16:50:46.519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-06-10T16:50:46.519Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-06-10T16:50:46.519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:50:46.519Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:50:48.236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-10T16:50:48.236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:50:48.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-06-10T16:50:48.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-06-10T16:50:48.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-10T16:50:48.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-06-10T16:50:48.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-10T16:50:48.240Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:50:48.240Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:50:48.243Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T16:50:48.243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:50:48.243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:50:48.244Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-06-10T16:50:48.240Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-656000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:51:17.417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-06-10T16:51:17.417Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-656000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	WARNING: 2023/06/10 16:51:17 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-06-10T16:51:17.432Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-06-10T16:51:17.433Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T16:51:17.434Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T16:51:17.434Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-656000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [8daf61a06ede] <==
	* {"level":"info","ts":"2023-06-10T16:51:31.213Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:51:31.213Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T16:51:31.214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-06-10T16:51:31.214Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-06-10T16:51:31.214Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:51:31.214Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:51:31.214Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-10T16:51:31.220Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T16:51:31.214Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T16:51:31.220Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:51:31.220Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-06-10T16:51:32.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-06-10T16:51:32.181Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-656000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:51:32.181Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:51:32.181Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:51:32.181Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:51:32.181Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:51:32.182Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-06-10T16:51:32.182Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  16:52:38 up 2 min,  0 users,  load average: 0.79, 0.30, 0.11
	Linux functional-656000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b5599e7dd85a] <==
	* I0610 16:51:32.835719       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 16:51:32.878432       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:51:32.878460       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 16:51:32.878489       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:51:32.878595       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0610 16:51:32.878603       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0610 16:51:32.878655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:51:32.878779       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 16:51:32.878997       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:51:33.645221       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:51:33.778560       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:51:34.378624       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:51:34.382033       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:51:34.395480       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 16:51:34.404206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:51:34.407382       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:51:44.927685       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:51:45.002658       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:51:55.112796       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.102.156.225]
	I0610 16:52:05.526413       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 16:52:05.569129       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.99.196.254]
	I0610 16:52:19.058423       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.104.163.247]
	I0610 16:52:35.405596       1 controller.go:624] quota admission added evaluator for: namespaces
	I0610 16:52:35.473439       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.99.42.97]
	I0610 16:52:35.504153       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.55.154]
	
	* 
	* ==> kube-controller-manager [1edb0328d3fa] <==
	* I0610 16:51:45.507485       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:51:45.559977       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:51:45.560035       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 16:51:59.963207       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0610 16:51:59.963422       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0610 16:52:05.527713       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0610 16:52:05.537107       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-xgv29"
	I0610 16:52:19.013988       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0610 16:52:19.017797       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-kwc7h"
	I0610 16:52:35.430622       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5dd9cbfd69 to 1"
	I0610 16:52:35.436791       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0610 16:52:35.438247       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5c5cfc8747 to 1"
	E0610 16:52:35.441382       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0610 16:52:35.444069       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0610 16:52:35.444329       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0610 16:52:35.444340       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0610 16:52:35.446768       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0610 16:52:35.448859       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0610 16:52:35.449041       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0610 16:52:35.449113       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0610 16:52:35.449128       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0610 16:52:35.453165       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0610 16:52:35.453188       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0610 16:52:35.473850       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5dd9cbfd69-lmfrr"
	I0610 16:52:35.480212       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5c5cfc8747-dl8w6"
	
	* 
	* ==> kube-controller-manager [85408977d450] <==
	* I0610 16:51:01.881484       1 shared_informer.go:318] Caches are synced for expand
	I0610 16:51:01.883622       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0610 16:51:01.885806       1 shared_informer.go:318] Caches are synced for TTL
	I0610 16:51:01.888940       1 shared_informer.go:318] Caches are synced for ephemeral
	I0610 16:51:01.908668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0610 16:51:01.909779       1 shared_informer.go:318] Caches are synced for job
	I0610 16:51:01.913002       1 shared_informer.go:318] Caches are synced for persistent volume
	I0610 16:51:01.913079       1 shared_informer.go:318] Caches are synced for GC
	I0610 16:51:01.913122       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0610 16:51:01.914285       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 16:51:01.914318       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0610 16:51:01.914348       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0610 16:51:01.914405       1 shared_informer.go:318] Caches are synced for daemon sets
	I0610 16:51:01.914418       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0610 16:51:01.915404       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 16:51:01.915506       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 16:51:01.963377       1 shared_informer.go:318] Caches are synced for disruption
	I0610 16:51:01.964671       1 shared_informer.go:318] Caches are synced for HPA
	I0610 16:51:01.972254       1 shared_informer.go:318] Caches are synced for deployment
	I0610 16:51:02.015370       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:51:02.018473       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0610 16:51:02.056561       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 16:51:02.435590       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:51:02.480269       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 16:51:02.480291       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [37477d07a518] <==
	* I0610 16:50:48.932579       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0610 16:50:48.932650       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0610 16:50:48.932676       1 server_others.go:551] "Using iptables proxy"
	I0610 16:50:48.948712       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:50:48.948724       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:50:48.948741       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:50:48.948911       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:50:48.948916       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:50:48.949700       1 config.go:188] "Starting service config controller"
	I0610 16:50:48.949738       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:50:48.949761       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:50:48.949775       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:50:48.949935       1 config.go:315] "Starting node config controller"
	I0610 16:50:48.949969       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:50:49.050546       1 shared_informer.go:318] Caches are synced for node config
	I0610 16:50:49.050564       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:50:49.050582       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [afc09444f024] <==
	* I0610 16:51:33.806718       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0610 16:51:33.806752       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0610 16:51:33.806914       1 server_others.go:551] "Using iptables proxy"
	I0610 16:51:33.818828       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 16:51:33.818842       1 server_others.go:190] "Using iptables Proxier"
	I0610 16:51:33.818857       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 16:51:33.819028       1 server.go:657] "Version info" version="v1.27.2"
	I0610 16:51:33.819033       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:51:33.819476       1 config.go:188] "Starting service config controller"
	I0610 16:51:33.819481       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 16:51:33.819494       1 config.go:97] "Starting endpoint slice config controller"
	I0610 16:51:33.819496       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 16:51:33.819620       1 config.go:315] "Starting node config controller"
	I0610 16:51:33.819622       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 16:51:33.920399       1 shared_informer.go:318] Caches are synced for node config
	I0610 16:51:33.920409       1 shared_informer.go:318] Caches are synced for service config
	I0610 16:51:33.920417       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [25267469a440] <==
	* I0610 16:50:46.916468       1 serving.go:348] Generated self-signed cert in-memory
	W0610 16:50:48.898766       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 16:50:48.898876       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:50:48.898898       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 16:50:48.898921       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 16:50:48.910899       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0610 16:50:48.910986       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:50:48.911576       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 16:50:48.911589       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:50:48.912157       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0610 16:50:48.912772       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 16:50:49.012081       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:51:17.431650       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0610 16:51:17.431810       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0610 16:51:17.431863       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0610 16:51:17.431879       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [9fae0abdb39a] <==
	* I0610 16:51:31.138821       1 serving.go:348] Generated self-signed cert in-memory
	W0610 16:51:32.802830       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 16:51:32.802844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 16:51:32.802848       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 16:51:32.802851       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 16:51:32.827070       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0610 16:51:32.827132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 16:51:32.827968       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0610 16:51:32.828826       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 16:51:32.828853       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 16:51:32.829731       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:51:32.930811       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:49:56 UTC, ends at Sat 2023-06-10 16:52:38 UTC. --
	Jun 10 16:52:30 functional-656000 kubelet[7246]: E0610 16:52:30.056797    7246 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-xgv29_default(7120a803-a8d7-4a3d-8acf-960033fcee5d)\"" pod="default/hello-node-connect-58d66798bb-xgv29" podUID=7120a803-a8d7-4a3d-8acf-960033fcee5d
	Jun 10 16:52:30 functional-656000 kubelet[7246]: E0610 16:52:30.203129    7246 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 16:52:30 functional-656000 kubelet[7246]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 16:52:30 functional-656000 kubelet[7246]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 16:52:30 functional-656000 kubelet[7246]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 16:52:30 functional-656000 kubelet[7246]: I0610 16:52:30.278736    7246 scope.go:115] "RemoveContainer" containerID="c8df0af987db20f4bea456ef734dcbf5b853012910420d83f3755e7866e6be3c"
	Jun 10 16:52:31 functional-656000 kubelet[7246]: I0610 16:52:31.341064    7246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksxrf\" (UniqueName: \"kubernetes.io/projected/a585885c-cda2-415c-a9b1-15dd7b9f75ba-kube-api-access-ksxrf\") pod \"a585885c-cda2-415c-a9b1-15dd7b9f75ba\" (UID: \"a585885c-cda2-415c-a9b1-15dd7b9f75ba\") "
	Jun 10 16:52:31 functional-656000 kubelet[7246]: I0610 16:52:31.341087    7246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a585885c-cda2-415c-a9b1-15dd7b9f75ba-test-volume\") pod \"a585885c-cda2-415c-a9b1-15dd7b9f75ba\" (UID: \"a585885c-cda2-415c-a9b1-15dd7b9f75ba\") "
	Jun 10 16:52:31 functional-656000 kubelet[7246]: I0610 16:52:31.342173    7246 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a585885c-cda2-415c-a9b1-15dd7b9f75ba-test-volume" (OuterVolumeSpecName: "test-volume") pod "a585885c-cda2-415c-a9b1-15dd7b9f75ba" (UID: "a585885c-cda2-415c-a9b1-15dd7b9f75ba"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 10 16:52:31 functional-656000 kubelet[7246]: I0610 16:52:31.344169    7246 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a585885c-cda2-415c-a9b1-15dd7b9f75ba-kube-api-access-ksxrf" (OuterVolumeSpecName: "kube-api-access-ksxrf") pod "a585885c-cda2-415c-a9b1-15dd7b9f75ba" (UID: "a585885c-cda2-415c-a9b1-15dd7b9f75ba"). InnerVolumeSpecName "kube-api-access-ksxrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 16:52:31 functional-656000 kubelet[7246]: I0610 16:52:31.441584    7246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ksxrf\" (UniqueName: \"kubernetes.io/projected/a585885c-cda2-415c-a9b1-15dd7b9f75ba-kube-api-access-ksxrf\") on node \"functional-656000\" DevicePath \"\""
	Jun 10 16:52:31 functional-656000 kubelet[7246]: I0610 16:52:31.441604    7246 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a585885c-cda2-415c-a9b1-15dd7b9f75ba-test-volume\") on node \"functional-656000\" DevicePath \"\""
	Jun 10 16:52:32 functional-656000 kubelet[7246]: I0610 16:52:32.111420    7246 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25aad7d4957cbff5520e191f64ca3131d73de89cca82d3b7d869164056e91239"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.196877    7246 scope.go:115] "RemoveContainer" containerID="85b86e0954505abe5a6e55f257bc398b63c98b5872b1e32fb68bb742e9674e8f"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.484360    7246 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: E0610 16:52:35.484398    7246 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a585885c-cda2-415c-a9b1-15dd7b9f75ba" containerName="mount-munger"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.484413    7246 memory_manager.go:346] "RemoveStaleState removing state" podUID="a585885c-cda2-415c-a9b1-15dd7b9f75ba" containerName="mount-munger"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.486621    7246 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.673951    7246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8mz\" (UniqueName: \"kubernetes.io/projected/38a43120-f6b7-4883-8b75-0f4d9245c1e6-kube-api-access-tx8mz\") pod \"dashboard-metrics-scraper-5dd9cbfd69-lmfrr\" (UID: \"38a43120-f6b7-4883-8b75-0f4d9245c1e6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-lmfrr"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.674141    7246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9pm\" (UniqueName: \"kubernetes.io/projected/742f1f9e-6146-4a4c-8212-c87eecc43415-kube-api-access-nt9pm\") pod \"kubernetes-dashboard-5c5cfc8747-dl8w6\" (UID: \"742f1f9e-6146-4a4c-8212-c87eecc43415\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-dl8w6"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.674215    7246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/38a43120-f6b7-4883-8b75-0f4d9245c1e6-tmp-volume\") pod \"dashboard-metrics-scraper-5dd9cbfd69-lmfrr\" (UID: \"38a43120-f6b7-4883-8b75-0f4d9245c1e6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-lmfrr"
	Jun 10 16:52:35 functional-656000 kubelet[7246]: I0610 16:52:35.674260    7246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/742f1f9e-6146-4a4c-8212-c87eecc43415-tmp-volume\") pod \"kubernetes-dashboard-5c5cfc8747-dl8w6\" (UID: \"742f1f9e-6146-4a4c-8212-c87eecc43415\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-dl8w6"
	Jun 10 16:52:36 functional-656000 kubelet[7246]: I0610 16:52:36.138451    7246 scope.go:115] "RemoveContainer" containerID="85b86e0954505abe5a6e55f257bc398b63c98b5872b1e32fb68bb742e9674e8f"
	Jun 10 16:52:36 functional-656000 kubelet[7246]: I0610 16:52:36.138624    7246 scope.go:115] "RemoveContainer" containerID="daa53364c98985622e0fd60f414d04e510a932b6562e4f267ca35cfb9c410a56"
	Jun 10 16:52:36 functional-656000 kubelet[7246]: E0610 16:52:36.138729    7246 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-kwc7h_default(9ab30c40-3e2d-4d1b-97d3-02670664ef59)\"" pod="default/hello-node-7b684b55f9-kwc7h" podUID=9ab30c40-3e2d-4d1b-97d3-02670664ef59
	
	* 
	* ==> storage-provisioner [1785432d7e78] <==
	* I0610 16:51:33.753138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:51:33.757484       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:51:33.757498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:51:51.160914       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:51:51.161811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-656000_90c2700e-8c5b-48ed-9bdc-da309ae298f3!
	I0610 16:51:51.164199       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4203cda2-943c-4187-831d-8a1b946783c2", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-656000_90c2700e-8c5b-48ed-9bdc-da309ae298f3 became leader
	I0610 16:51:51.263054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-656000_90c2700e-8c5b-48ed-9bdc-da309ae298f3!
	I0610 16:51:59.963975       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0610 16:51:59.964134       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    08beb585-2448-47d0-bb5f-212fe561da0d 361 0 2023-06-10 16:50:28 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-06-10 16:50:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d21dcb79-d71c-453a-8f16-85fca7f8780b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d21dcb79-d71c-453a-8f16-85fca7f8780b 611 0 2023-06-10 16:51:59 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-06-10 16:51:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-06-10 16:51:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0610 16:51:59.964559       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d21dcb79-d71c-453a-8f16-85fca7f8780b" provisioned
	I0610 16:51:59.964571       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0610 16:51:59.964574       1 volume_store.go:212] Trying to save persistentvolume "pvc-d21dcb79-d71c-453a-8f16-85fca7f8780b"
	I0610 16:51:59.965360       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d21dcb79-d71c-453a-8f16-85fca7f8780b", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0610 16:51:59.969420       1 volume_store.go:219] persistentvolume "pvc-d21dcb79-d71c-453a-8f16-85fca7f8780b" saved
	I0610 16:51:59.969530       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d21dcb79-d71c-453a-8f16-85fca7f8780b", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d21dcb79-d71c-453a-8f16-85fca7f8780b
	
	* 
	* ==> storage-provisioner [f0915e56817f] <==
	* I0610 16:50:47.100632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:50:48.930476       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:50:48.931561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:51:06.343564       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:51:06.343784       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4203cda2-943c-4187-831d-8a1b946783c2", APIVersion:"v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-656000_a98f863e-faa8-4f3f-97e3-b986789d4f8f became leader
	I0610 16:51:06.343822       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-656000_a98f863e-faa8-4f3f-97e3-b986789d4f8f!
	I0610 16:51:06.444633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-656000_a98f863e-faa8-4f3f-97e3-b986789d4f8f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-656000 -n functional-656000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-656000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5dd9cbfd69-lmfrr kubernetes-dashboard-5c5cfc8747-dl8w6
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-656000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-lmfrr kubernetes-dashboard-5c5cfc8747-dl8w6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-656000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-lmfrr kubernetes-dashboard-5c5cfc8747-dl8w6: exit status 1 (42.88825ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656000/192.168.105.4
	Start Time:       Sat, 10 Jun 2023 09:52:27 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://192b500229dc49d74c9c93b9e3a0d79f60e08c1db07c3370f6b99c17e4b5580f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 Jun 2023 09:52:29 -0700
	      Finished:     Sat, 10 Jun 2023 09:52:29 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksxrf (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ksxrf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-656000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.118700646s (2.118705272s including waiting)
	  Normal  Created    9s    kubelet            Created container mount-munger
	  Normal  Started    9s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5dd9cbfd69-lmfrr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-dl8w6" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-656000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-lmfrr kubernetes-dashboard-5c5cfc8747-dl8w6: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.37s)
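
The kubelet log above shows the echoserver-arm container behind the hello-node-connect deployment stuck in CrashLoopBackOff, which is the most likely reason the service connectivity check ran out of time. A minimal triage sketch against the same profile, assuming the cluster is still up (the pod name is taken from the log above and will differ between runs):

    kubectl --context functional-656000 get pods -o wide
    kubectl --context functional-656000 describe pod hello-node-connect-58d66798bb-xgv29
    kubectl --context functional-656000 logs hello-node-connect-58d66798bb-xgv29 --previous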

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-656000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-656000 image ls --format yaml --alsologtostderr:
I0610 09:52:52.519928    3138 out.go:296] Setting OutFile to fd 1 ...
I0610 09:52:52.520783    3138 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.520788    3138 out.go:309] Setting ErrFile to fd 2...
I0610 09:52:52.520790    3138 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.520867    3138 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
I0610 09:52:52.521304    3138 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.521361    3138 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
W0610 09:52:52.521604    3138 cache_images.go:695] error getting status for functional-656000: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/functional-656000/monitor: connect: connection refused
functional_test.go:273: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
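
The empty image list together with the stderr above points at the profile's qemu monitor socket refusing connections rather than at missing images. A minimal re-check against the same profile, assuming it still exists on this host, would be:

    out/minikube-darwin-arm64 -p functional-656000 status
    out/minikube-darwin-arm64 -p functional-656000 image ls --format yaml --alsologtostderr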

TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-179000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-179000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 6bc160e736c8
	Removing intermediate container 6bc160e736c8
	 ---> 5b039519b7ff
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in ebc9d1a52058
	Removing intermediate container ebc9d1a52058
	 ---> 3b4b34934eaa
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 890dc3da3c74
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
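
The platform warnings and the final "exec format error" in the build output match the usual amd64-image-on-arm64-host failure: gcr.io/google-containers/alpine-with-bash:1.0 resolves to linux/amd64 even though the minikube node reports linux/arm64/v8, so the RUN step cannot execute. One way to confirm which platforms the base image actually publishes is to inspect its manifest from the host; this is a diagnostic suggestion only, and docker manifest inspect may require a reasonably recent Docker CLI:

    docker manifest inspect gcr.io/google-containers/alpine-with-bash:1.0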
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-179000 -n image-179000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-179000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-656000 image load --daemon                    | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-656000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| docker-env     | functional-656000 docker-env                             | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| update-context | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-656000 image ls                               | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| image          | functional-656000 image load --daemon                    | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-656000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000 image ls                               | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| image          | functional-656000 image save                             | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-656000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000 image rm                               | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-656000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000 image ls                               | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| image          | functional-656000 image load                             | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000 image ls                               | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| image          | functional-656000 image save --daemon                    | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-656000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-656000 ssh pgrep                              | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000 image build -t                         | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | localhost/my-image:functional-656000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-656000                                        | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-656000 image ls                               | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| delete         | -p functional-656000                                     | functional-656000 | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| start          | -p image-179000 --driver=qemu2                           | image-179000      | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:53 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-179000      | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-179000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-179000      | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-179000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:52:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:52:55.775868    3164 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:52:55.775990    3164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:55.775992    3164 out.go:309] Setting ErrFile to fd 2...
	I0610 09:52:55.775993    3164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:55.776064    3164 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:52:55.777083    3164 out.go:303] Setting JSON to false
	I0610 09:52:55.793140    3164 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3146,"bootTime":1686412829,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:52:55.793208    3164 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:52:55.797200    3164 out.go:177] * [image-179000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:52:55.803381    3164 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:52:55.803436    3164 notify.go:220] Checking for updates...
	I0610 09:52:55.809343    3164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:52:55.812413    3164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:52:55.813376    3164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:52:55.816377    3164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:52:55.819390    3164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:52:55.822578    3164 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:52:55.826330    3164 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:52:55.833364    3164 start.go:297] selected driver: qemu2
	I0610 09:52:55.833366    3164 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:52:55.833372    3164 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:52:55.833446    3164 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:52:55.836431    3164 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:52:55.841533    3164 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 09:52:55.841621    3164 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 09:52:55.841631    3164 cni.go:84] Creating CNI manager for ""
	I0610 09:52:55.841635    3164 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:52:55.841639    3164 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 09:52:55.841646    3164 start_flags.go:319] config:
	{Name:image-179000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:52:55.841727    3164 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:52:55.848391    3164 out.go:177] * Starting control plane node image-179000 in cluster image-179000
	I0610 09:52:55.852384    3164 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:52:55.852415    3164 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:52:55.852427    3164 cache.go:57] Caching tarball of preloaded images
	I0610 09:52:55.852498    3164 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 09:52:55.852502    3164 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 09:52:55.852695    3164 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/config.json ...
	I0610 09:52:55.852706    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/config.json: {Name:mk35fd80ef6cc2c64a8b76a559bebc381d950004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:52:55.852906    3164 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:52:55.852915    3164 start.go:364] acquiring machines lock for image-179000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:52:55.852948    3164 start.go:368] acquired machines lock for "image-179000" in 29.084µs
	I0610 09:52:55.852960    3164 start.go:93] Provisioning new machine with config: &{Name:image-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:image-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:52:55.852981    3164 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:52:55.856359    3164 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 09:52:55.878794    3164 start.go:159] libmachine.API.Create for "image-179000" (driver="qemu2")
	I0610 09:52:55.878815    3164 client.go:168] LocalClient.Create starting
	I0610 09:52:55.878879    3164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:52:55.878897    3164 main.go:141] libmachine: Decoding PEM data...
	I0610 09:52:55.878905    3164 main.go:141] libmachine: Parsing certificate...
	I0610 09:52:55.878947    3164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:52:55.878960    3164 main.go:141] libmachine: Decoding PEM data...
	I0610 09:52:55.878966    3164 main.go:141] libmachine: Parsing certificate...
	I0610 09:52:55.879229    3164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:52:56.428907    3164 main.go:141] libmachine: Creating SSH key...
	I0610 09:52:56.589889    3164 main.go:141] libmachine: Creating Disk image...
	I0610 09:52:56.589895    3164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:52:56.590056    3164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/disk.qcow2
	I0610 09:52:56.607341    3164 main.go:141] libmachine: STDOUT: 
	I0610 09:52:56.607357    3164 main.go:141] libmachine: STDERR: 
	I0610 09:52:56.607420    3164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/disk.qcow2 +20000M
	I0610 09:52:56.614803    3164 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:52:56.614813    3164 main.go:141] libmachine: STDERR: 
	I0610 09:52:56.614830    3164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/disk.qcow2
	I0610 09:52:56.614836    3164 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:52:56.614873    3164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a6:dd:cb:1f:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/disk.qcow2
	I0610 09:52:56.650872    3164 main.go:141] libmachine: STDOUT: 
	I0610 09:52:56.650886    3164 main.go:141] libmachine: STDERR: 
	I0610 09:52:56.650889    3164 main.go:141] libmachine: Attempt 0
	I0610 09:52:56.650898    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:52:56.651135    3164 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 09:52:56.651157    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:52:56.651166    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:52:56.651181    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:52:58.653317    3164 main.go:141] libmachine: Attempt 1
	I0610 09:52:58.653366    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:52:58.653723    3164 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 09:52:58.653767    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:52:58.653793    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:52:58.653848    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:00.655977    3164 main.go:141] libmachine: Attempt 2
	I0610 09:53:00.655991    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:53:00.656125    3164 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 09:53:00.656136    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:00.656140    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:00.656151    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:02.658154    3164 main.go:141] libmachine: Attempt 3
	I0610 09:53:02.658158    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:53:02.658192    3164 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 09:53:02.658197    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:02.658202    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:02.658206    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:04.660193    3164 main.go:141] libmachine: Attempt 4
	I0610 09:53:04.660197    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:53:04.660239    3164 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 09:53:04.660244    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:04.660248    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:04.660252    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:06.662368    3164 main.go:141] libmachine: Attempt 5
	I0610 09:53:06.662391    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:53:06.662491    3164 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 09:53:06.662500    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:06.662508    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:06.662512    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:08.663602    3164 main.go:141] libmachine: Attempt 6
	I0610 09:53:08.663616    3164 main.go:141] libmachine: Searching for 46:a6:dd:cb:1f:6f in /var/db/dhcpd_leases ...
	I0610 09:53:08.663759    3164 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:08.663769    3164 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:08.663772    3164 main.go:141] libmachine: Found match: 46:a6:dd:cb:1f:6f
	I0610 09:53:08.663782    3164 main.go:141] libmachine: IP: 192.168.105.5
	I0610 09:53:08.663786    3164 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0610 09:53:10.782188    3164 machine.go:88] provisioning docker machine ...
	I0610 09:53:10.782356    3164 buildroot.go:166] provisioning hostname "image-179000"
	I0610 09:53:10.782646    3164 main.go:141] libmachine: Using SSH client type: native
	I0610 09:53:10.783623    3164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011ec6d0] 0x1011ef130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 09:53:10.783639    3164 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-179000 && echo "image-179000" | sudo tee /etc/hostname
	I0610 09:53:10.863705    3164 main.go:141] libmachine: SSH cmd err, output: <nil>: image-179000
	
	I0610 09:53:10.863812    3164 main.go:141] libmachine: Using SSH client type: native
	I0610 09:53:10.864256    3164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011ec6d0] 0x1011ef130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 09:53:10.864267    3164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-179000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-179000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-179000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:53:10.927329    3164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:53:10.927352    3164 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:53:10.927373    3164 buildroot.go:174] setting up certificates
	I0610 09:53:10.927379    3164 provision.go:83] configureAuth start
	I0610 09:53:10.927383    3164 provision.go:138] copyHostCerts
	I0610 09:53:10.927529    3164 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem, removing ...
	I0610 09:53:10.927535    3164 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem
	I0610 09:53:10.927728    3164 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:53:10.928007    3164 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem, removing ...
	I0610 09:53:10.928009    3164 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem
	I0610 09:53:10.928071    3164 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:53:10.928211    3164 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem, removing ...
	I0610 09:53:10.928213    3164 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem
	I0610 09:53:10.928271    3164 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:53:10.928393    3164 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.image-179000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-179000]
	I0610 09:53:11.092988    3164 provision.go:172] copyRemoteCerts
	I0610 09:53:11.093034    3164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:53:11.093042    3164 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/id_rsa Username:docker}
	I0610 09:53:11.121153    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:53:11.128452    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0610 09:53:11.135481    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:53:11.142160    3164 provision.go:86] duration metric: configureAuth took 214.781125ms
	I0610 09:53:11.142166    3164 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:53:11.142273    3164 config.go:182] Loaded profile config "image-179000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:53:11.142308    3164 main.go:141] libmachine: Using SSH client type: native
	I0610 09:53:11.142525    3164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011ec6d0] 0x1011ef130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 09:53:11.142528    3164 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:53:11.194696    3164 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:53:11.194700    3164 buildroot.go:70] root file system type: tmpfs
	I0610 09:53:11.194758    3164 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:53:11.194812    3164 main.go:141] libmachine: Using SSH client type: native
	I0610 09:53:11.195410    3164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011ec6d0] 0x1011ef130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 09:53:11.195513    3164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:53:11.250895    3164 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:53:11.250945    3164 main.go:141] libmachine: Using SSH client type: native
	I0610 09:53:11.251191    3164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011ec6d0] 0x1011ef130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 09:53:11.251198    3164 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:53:11.567854    3164 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:53:11.567862    3164 machine.go:91] provisioned docker machine in 785.576667ms
	I0610 09:53:11.567866    3164 client.go:171] LocalClient.Create took 15.689281583s
	I0610 09:53:11.567874    3164 start.go:167] duration metric: libmachine.API.Create for "image-179000" took 15.68931625s
	I0610 09:53:11.567877    3164 start.go:300] post-start starting for "image-179000" (driver="qemu2")
	I0610 09:53:11.567879    3164 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:53:11.567939    3164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:53:11.567962    3164 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/id_rsa Username:docker}
	I0610 09:53:11.594020    3164 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:53:11.595257    3164 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:53:11.595263    3164 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:53:11.595326    3164 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:53:11.595431    3164 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem -> 15642.pem in /etc/ssl/certs
	I0610 09:53:11.595538    3164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 09:53:11.597960    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem --> /etc/ssl/certs/15642.pem (1708 bytes)
	I0610 09:53:11.605037    3164 start.go:303] post-start completed in 37.157291ms
	I0610 09:53:11.605399    3164 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/config.json ...
	I0610 09:53:11.605552    3164 start.go:128] duration metric: createHost completed in 15.752803s
	I0610 09:53:11.605575    3164 main.go:141] libmachine: Using SSH client type: native
	I0610 09:53:11.605790    3164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011ec6d0] 0x1011ef130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 09:53:11.605793    3164 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 09:53:11.658495    3164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686415991.646309794
	
	I0610 09:53:11.658499    3164 fix.go:207] guest clock: 1686415991.646309794
	I0610 09:53:11.658502    3164 fix.go:220] Guest: 2023-06-10 09:53:11.646309794 -0700 PDT Remote: 2023-06-10 09:53:11.605553 -0700 PDT m=+15.849604293 (delta=40.756794ms)
	I0610 09:53:11.658511    3164 fix.go:191] guest clock delta is within tolerance: 40.756794ms
	I0610 09:53:11.658513    3164 start.go:83] releasing machines lock for "image-179000", held for 15.805796958s
	I0610 09:53:11.658777    3164 ssh_runner.go:195] Run: cat /version.json
	I0610 09:53:11.658783    3164 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/id_rsa Username:docker}
	I0610 09:53:11.658794    3164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:53:11.658810    3164 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/id_rsa Username:docker}
	I0610 09:53:11.687189    3164 ssh_runner.go:195] Run: systemctl --version
	I0610 09:53:11.728350    3164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:53:11.730094    3164 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:53:11.730126    3164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 09:53:11.735160    3164 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:53:11.735164    3164 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:53:11.735238    3164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:53:11.742903    3164 docker.go:633] Got preloaded images: 
	I0610 09:53:11.742910    3164 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 09:53:11.742953    3164 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:53:11.745720    3164 ssh_runner.go:195] Run: which lz4
	I0610 09:53:11.747221    3164 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 09:53:11.748536    3164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:53:11.748547    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 09:53:13.036382    3164 docker.go:597] Took 1.289226 seconds to copy over tarball
	I0610 09:53:13.036435    3164 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:53:14.072842    3164 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036410917s)
	I0610 09:53:14.072850    3164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:53:14.088803    3164 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:53:14.092502    3164 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 09:53:14.097651    3164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:53:14.158338    3164 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:53:15.318656    3164 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160322291s)
	I0610 09:53:15.318674    3164 start.go:481] detecting cgroup driver to use...
	I0610 09:53:15.318742    3164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:53:15.324080    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 09:53:15.327831    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:53:15.330814    3164 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:53:15.330836    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:53:15.333749    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:53:15.336797    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:53:15.340125    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:53:15.343462    3164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:53:15.346407    3164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:53:15.349481    3164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:53:15.353020    3164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:53:15.356376    3164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:53:15.418341    3164 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 09:53:15.426758    3164 start.go:481] detecting cgroup driver to use...
	I0610 09:53:15.426829    3164 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:53:15.432590    3164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:53:15.437472    3164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:53:15.446733    3164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:53:15.451452    3164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:53:15.455981    3164 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:53:15.521073    3164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:53:15.527428    3164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:53:15.533802    3164 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:53:15.535182    3164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:53:15.537866    3164 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:53:15.542945    3164 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:53:15.606657    3164 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:53:15.665888    3164 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:53:15.665898    3164 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:53:15.671105    3164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:53:15.730074    3164 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:53:16.898170    3164 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.168101s)
	I0610 09:53:16.898220    3164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:53:16.959110    3164 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 09:53:17.017752    3164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 09:53:17.083839    3164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:53:17.143344    3164 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 09:53:17.149646    3164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:53:17.222779    3164 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 09:53:17.244746    3164 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 09:53:17.244829    3164 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 09:53:17.248426    3164 start.go:549] Will wait 60s for crictl version
	I0610 09:53:17.248472    3164 ssh_runner.go:195] Run: which crictl
	I0610 09:53:17.249867    3164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 09:53:17.264414    3164 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 09:53:17.264469    3164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:53:17.273204    3164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:53:17.288563    3164 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 09:53:17.288711    3164 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:53:17.290191    3164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:53:17.293736    3164 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:53:17.293775    3164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:53:17.299731    3164 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:53:17.299735    3164 docker.go:563] Images already preloaded, skipping extraction
	I0610 09:53:17.299779    3164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:53:17.305365    3164 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 09:53:17.305371    3164 cache_images.go:84] Images are preloaded, skipping loading
	I0610 09:53:17.305426    3164 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:53:17.312922    3164 cni.go:84] Creating CNI manager for ""
	I0610 09:53:17.312928    3164 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:53:17.312939    3164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:53:17.312947    3164 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-179000 NodeName:image-179000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 09:53:17.313015    3164 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-179000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:53:17.313040    3164 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-179000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:image-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:53:17.313092    3164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 09:53:17.316751    3164 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:53:17.316778    3164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:53:17.320015    3164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0610 09:53:17.325131    3164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 09:53:17.330262    3164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0610 09:53:17.335306    3164 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0610 09:53:17.336548    3164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:53:17.340489    3164 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000 for IP: 192.168.105.5
	I0610 09:53:17.340495    3164 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.340625    3164 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:53:17.340662    3164 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:53:17.340691    3164 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/client.key
	I0610 09:53:17.340702    3164 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/client.crt with IP's: []
	I0610 09:53:17.429558    3164 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/client.crt ...
	I0610 09:53:17.429561    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/client.crt: {Name:mk47dba4aa73dcc8665913130251a85719da433e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.429773    3164 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/client.key ...
	I0610 09:53:17.429775    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/client.key: {Name:mk8f441da5153dc54e6ae4d40f067703dff315b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.429881    3164 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.key.e69b33ca
	I0610 09:53:17.429886    3164 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:53:17.471555    3164 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.crt.e69b33ca ...
	I0610 09:53:17.471557    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.crt.e69b33ca: {Name:mk7dfb39f79e4a9e9228f3efdb28a4caf6e2f545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.471696    3164 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.key.e69b33ca ...
	I0610 09:53:17.471698    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.key.e69b33ca: {Name:mkde5d70265b4f90bde4442edc7bfb073578f1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.471812    3164 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.crt
	I0610 09:53:17.471905    3164 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.key
	I0610 09:53:17.471987    3164 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.key
	I0610 09:53:17.471992    3164 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.crt with IP's: []
	I0610 09:53:17.590992    3164 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.crt ...
	I0610 09:53:17.590995    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.crt: {Name:mk22682d66885ebca280593a4b46f527e99ba73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.591117    3164 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.key ...
	I0610 09:53:17.591118    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.key: {Name:mk39537f9be5f97f0f8cfc141aa3c8a291aa5b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:17.591374    3164 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564.pem (1338 bytes)
	W0610 09:53:17.591399    3164 certs.go:433] ignoring /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564_empty.pem, impossibly tiny 0 bytes
	I0610 09:53:17.591406    3164 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:53:17.591434    3164 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:53:17.591453    3164 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:53:17.591470    3164 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:53:17.591513    3164 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem (1708 bytes)
	I0610 09:53:17.591904    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:53:17.600997    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:53:17.608530    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:53:17.615897    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/image-179000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 09:53:17.622781    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:53:17.629370    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:53:17.636546    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:53:17.643612    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:53:17.650459    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem --> /usr/share/ca-certificates/15642.pem (1708 bytes)
	I0610 09:53:17.657183    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:53:17.663868    3164 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564.pem --> /usr/share/ca-certificates/1564.pem (1338 bytes)
	I0610 09:53:17.670746    3164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:53:17.675437    3164 ssh_runner.go:195] Run: openssl version
	I0610 09:53:17.677249    3164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15642.pem && ln -fs /usr/share/ca-certificates/15642.pem /etc/ssl/certs/15642.pem"
	I0610 09:53:17.680805    3164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15642.pem
	I0610 09:53:17.682445    3164 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 16:49 /usr/share/ca-certificates/15642.pem
	I0610 09:53:17.682468    3164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15642.pem
	I0610 09:53:17.684228    3164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15642.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 09:53:17.687109    3164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:53:17.689986    3164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:53:17.691483    3164 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:53:17.691499    3164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:53:17.693477    3164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 09:53:17.696985    3164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1564.pem && ln -fs /usr/share/ca-certificates/1564.pem /etc/ssl/certs/1564.pem"
	I0610 09:53:17.700570    3164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1564.pem
	I0610 09:53:17.702161    3164 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 16:49 /usr/share/ca-certificates/1564.pem
	I0610 09:53:17.702176    3164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1564.pem
	I0610 09:53:17.704060    3164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1564.pem /etc/ssl/certs/51391683.0"
	I0610 09:53:17.707092    3164 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:53:17.708430    3164 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:53:17.708457    3164 kubeadm.go:404] StartCluster: {Name:image-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-179000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:53:17.708515    3164 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:53:17.714031    3164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:53:17.717302    3164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:53:17.720402    3164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:53:17.723113    3164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:53:17.723124    3164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 09:53:17.745922    3164 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 09:53:17.746021    3164 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:53:17.799385    3164 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:53:17.799457    3164 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:53:17.799505    3164 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:53:17.862826    3164 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:53:17.866906    3164 out.go:204]   - Generating certificates and keys ...
	I0610 09:53:17.866946    3164 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:53:17.866992    3164 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:53:17.966197    3164 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:53:18.142565    3164 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:53:18.285466    3164 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:53:18.500004    3164 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:53:18.576988    3164 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:53:18.577051    3164 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-179000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0610 09:53:18.952313    3164 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:53:18.952414    3164 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-179000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0610 09:53:19.157439    3164 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:53:19.223696    3164 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:53:19.351591    3164 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:53:19.351617    3164 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:53:19.409544    3164 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:53:19.484189    3164 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:53:19.545880    3164 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:53:19.679124    3164 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:53:19.686024    3164 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:53:19.686072    3164 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:53:19.686090    3164 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:53:19.751914    3164 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:53:19.760110    3164 out.go:204]   - Booting up control plane ...
	I0610 09:53:19.760190    3164 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:53:19.760270    3164 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:53:19.760306    3164 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:53:19.760392    3164 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:53:19.760464    3164 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:53:23.757690    3164 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001160 seconds
	I0610 09:53:23.757792    3164 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:53:23.765808    3164 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:53:24.282993    3164 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:53:24.283156    3164 kubeadm.go:322] [mark-control-plane] Marking the node image-179000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 09:53:24.789609    3164 kubeadm.go:322] [bootstrap-token] Using token: 25sxro.8mia14dknztj7ni0
	I0610 09:53:24.794696    3164 out.go:204]   - Configuring RBAC rules ...
	I0610 09:53:24.794758    3164 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:53:24.795891    3164 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:53:24.803422    3164 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:53:24.804785    3164 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:53:24.806125    3164 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:53:24.807281    3164 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:53:24.812495    3164 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:53:24.983602    3164 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:53:25.198663    3164 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:53:25.199029    3164 kubeadm.go:322] 
	I0610 09:53:25.199063    3164 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:53:25.199065    3164 kubeadm.go:322] 
	I0610 09:53:25.199105    3164 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:53:25.199106    3164 kubeadm.go:322] 
	I0610 09:53:25.199118    3164 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:53:25.199152    3164 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:53:25.199178    3164 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:53:25.199181    3164 kubeadm.go:322] 
	I0610 09:53:25.199206    3164 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 09:53:25.199207    3164 kubeadm.go:322] 
	I0610 09:53:25.199237    3164 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 09:53:25.199239    3164 kubeadm.go:322] 
	I0610 09:53:25.199268    3164 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:53:25.199311    3164 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:53:25.199342    3164 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:53:25.199344    3164 kubeadm.go:322] 
	I0610 09:53:25.199388    3164 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:53:25.199424    3164 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:53:25.199425    3164 kubeadm.go:322] 
	I0610 09:53:25.199471    3164 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 25sxro.8mia14dknztj7ni0 \
	I0610 09:53:25.199518    3164 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:53:25.199527    3164 kubeadm.go:322] 	--control-plane 
	I0610 09:53:25.199529    3164 kubeadm.go:322] 
	I0610 09:53:25.199571    3164 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:53:25.199572    3164 kubeadm.go:322] 
	I0610 09:53:25.199609    3164 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 25sxro.8mia14dknztj7ni0 \
	I0610 09:53:25.199660    3164 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:53:25.199717    3164 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:53:25.199801    3164 kubeadm.go:322] W0610 16:53:17.787355    1336 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:53:25.199884    3164 kubeadm.go:322] W0610 16:53:19.742749    1336 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 09:53:25.199892    3164 cni.go:84] Creating CNI manager for ""
	I0610 09:53:25.199899    3164 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:53:25.206921    3164 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 09:53:25.210933    3164 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 09:53:25.213942    3164 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 09:53:25.218637    3164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:53:25.218715    3164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:53:25.218716    3164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=image-179000 minikube.k8s.io/updated_at=2023_06_10T09_53_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:53:25.222054    3164 ops.go:34] apiserver oom_adj: -16
	I0610 09:53:25.287302    3164 kubeadm.go:1076] duration metric: took 68.622625ms to wait for elevateKubeSystemPrivileges.
	I0610 09:53:25.288490    3164 kubeadm.go:406] StartCluster complete in 7.5801465s
	I0610 09:53:25.288498    3164 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:25.288579    3164 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:53:25.288887    3164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:25.289075    3164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:53:25.289120    3164 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 09:53:25.289154    3164 addons.go:66] Setting storage-provisioner=true in profile "image-179000"
	I0610 09:53:25.289160    3164 addons.go:228] Setting addon storage-provisioner=true in "image-179000"
	I0610 09:53:25.289182    3164 host.go:66] Checking if "image-179000" exists ...
	I0610 09:53:25.289181    3164 addons.go:66] Setting default-storageclass=true in profile "image-179000"
	I0610 09:53:25.289189    3164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-179000"
	I0610 09:53:25.289325    3164 config.go:182] Loaded profile config "image-179000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:53:25.294932    3164 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:53:25.299252    3164 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:53:25.299256    3164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:53:25.299263    3164 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/id_rsa Username:docker}
	I0610 09:53:25.304823    3164 addons.go:228] Setting addon default-storageclass=true in "image-179000"
	I0610 09:53:25.304839    3164 host.go:66] Checking if "image-179000" exists ...
	I0610 09:53:25.305495    3164 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:53:25.305498    3164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:53:25.305505    3164 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/image-179000/id_rsa Username:docker}
	I0610 09:53:25.335230    3164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:53:25.339902    3164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:53:25.370410    3164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:53:25.745142    3164 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 09:53:25.811456    3164 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-179000" context rescaled to 1 replicas
	I0610 09:53:25.811470    3164 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:53:25.814472    3164 out.go:177] * Verifying Kubernetes components...
	I0610 09:53:25.823537    3164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:53:25.845477    3164 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 09:53:25.842922    3164 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:53:25.853374    3164 addons.go:499] enable addons completed in 564.27425ms: enabled=[default-storageclass storage-provisioner]
	I0610 09:53:25.853394    3164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:53:25.857514    3164 api_server.go:72] duration metric: took 46.034833ms to wait for apiserver process to appear ...
	I0610 09:53:25.857522    3164 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:53:25.857526    3164 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0610 09:53:25.860572    3164 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0610 09:53:25.861196    3164 api_server.go:141] control plane version: v1.27.2
	I0610 09:53:25.861200    3164 api_server.go:131] duration metric: took 3.6765ms to wait for apiserver health ...
	I0610 09:53:25.861202    3164 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:53:25.863965    3164 system_pods.go:59] 5 kube-system pods found
	I0610 09:53:25.863970    3164 system_pods.go:61] "etcd-image-179000" [41c1f971-2faa-45f7-af89-1d5e08de4cd1] Pending
	I0610 09:53:25.863972    3164 system_pods.go:61] "kube-apiserver-image-179000" [7c408cfa-4d53-424a-bdd3-53549a0a11ea] Pending
	I0610 09:53:25.863974    3164 system_pods.go:61] "kube-controller-manager-image-179000" [767a80c9-e80f-4ccd-a7ab-9e57e6bb08a0] Pending
	I0610 09:53:25.863976    3164 system_pods.go:61] "kube-scheduler-image-179000" [2dab10d8-905d-41f4-9c0f-37fdb327384d] Pending
	I0610 09:53:25.863977    3164 system_pods.go:61] "storage-provisioner" [d88ec518-c06d-4ed5-be4f-12f9f0219941] Pending
	I0610 09:53:25.863979    3164 system_pods.go:74] duration metric: took 2.775ms to wait for pod list to return data ...
	I0610 09:53:25.863982    3164 kubeadm.go:581] duration metric: took 52.502917ms to wait for : map[apiserver:true system_pods:true] ...
	I0610 09:53:25.863986    3164 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:53:25.865305    3164 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:53:25.865311    3164 node_conditions.go:123] node cpu capacity is 2
	I0610 09:53:25.865315    3164 node_conditions.go:105] duration metric: took 1.32775ms to run NodePressure ...
	I0610 09:53:25.865319    3164 start.go:228] waiting for startup goroutines ...
	I0610 09:53:25.865322    3164 start.go:233] waiting for cluster config update ...
	I0610 09:53:25.865326    3164 start.go:242] writing updated cluster config ...
	I0610 09:53:25.865563    3164 ssh_runner.go:195] Run: rm -f paused
	I0610 09:53:25.896014    3164 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 09:53:25.899601    3164 out.go:177] 
	W0610 09:53:25.903474    3164 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 09:53:25.907486    3164 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 09:53:25.915499    3164 out.go:177] * Done! kubectl is now configured to use "image-179000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:53:07 UTC, ends at Sat 2023-06-10 16:53:28 UTC. --
	Jun 10 16:53:20 image-179000 cri-dockerd[1171]: time="2023-06-10T16:53:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/117409ceaf81f5b4c2ada04f796329b0d3f79e3306bc27b3642427259622eba9/resolv.conf as [nameserver 192.168.105.1]"
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.754143340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.754252090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.754277673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.754301840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:20 image-179000 cri-dockerd[1171]: time="2023-06-10T16:53:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2b5ebee9e46fabbfd216aed1ef7edc8835ba0562ef4970bb8360a0fb1105f652/resolv.conf as [nameserver 192.168.105.1]"
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.827255131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.827329715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.827336673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.827341048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.829107423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.829125006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.829133965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:53:20 image-179000 dockerd[946]: time="2023-06-10T16:53:20.829192298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:28 image-179000 dockerd[940]: time="2023-06-10T16:53:28.114225801Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 10 16:53:28 image-179000 dockerd[940]: time="2023-06-10T16:53:28.232677677Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 10 16:53:28 image-179000 dockerd[940]: time="2023-06-10T16:53:28.247790593Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.273196885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.273228510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.273240593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.273394260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:53:28 image-179000 dockerd[940]: time="2023-06-10T16:53:28.390313593Z" level=info msg="ignoring event" container=890dc3da3c74c2d7188d46a66548bf02dba97d42a04a7ffdbb745fbd1c5c8f2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.390478052Z" level=info msg="shim disconnected" id=890dc3da3c74c2d7188d46a66548bf02dba97d42a04a7ffdbb745fbd1c5c8f2c namespace=moby
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.390504552Z" level=warning msg="cleaning up after shim disconnected" id=890dc3da3c74c2d7188d46a66548bf02dba97d42a04a7ffdbb745fbd1c5c8f2c namespace=moby
	Jun 10 16:53:28 image-179000 dockerd[946]: time="2023-06-10T16:53:28.390508552Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	f6d75a3c7d444       305d7ed1dae28       8 seconds ago       Running             kube-scheduler            0                   2b5ebee9e46fa
	ed93cdeca9184       2ee705380c3c5       8 seconds ago       Running             kube-controller-manager   0                   117409ceaf81f
	4ef6bf18854fb       72c9df6be7f1b       8 seconds ago       Running             kube-apiserver            0                   46c5e2ec68418
	70259f8bbf90d       24bc64e911039       8 seconds ago       Running             etcd                      0                   33741f8c2dbb1
	
	* 
	* ==> describe nodes <==
	* Name:               image-179000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-179000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=image-179000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_53_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:53:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-179000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:53:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:53:27 +0000   Sat, 10 Jun 2023 16:53:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:53:27 +0000   Sat, 10 Jun 2023 16:53:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:53:27 +0000   Sat, 10 Jun 2023 16:53:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:53:27 +0000   Sat, 10 Jun 2023 16:53:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-179000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 76566666a4164f7d841dc224ce67f8f3
	  System UUID:                76566666a4164f7d841dc224ce67f8f3
	  Boot ID:                    22b68958-a994-433c-b284-af846ea82907
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-179000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-179000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-179000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-179000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x7 over 8s)  kubelet  Node image-179000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x7 over 8s)  kubelet  Node image-179000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-179000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-179000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-179000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-179000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                1s               kubelet  Node image-179000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Jun10 16:53] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.625718] EINJ: EINJ table not found.
	[  +0.490941] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.044708] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000806] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.318250] systemd-fstab-generator[478]: Ignoring "noauto" for root device
	[  +0.056813] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +2.728945] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +1.261112] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.186588] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +0.061825] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[  +0.061877] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +1.148869] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.079711] systemd-fstab-generator[1091]: Ignoring "noauto" for root device
	[  +0.059259] systemd-fstab-generator[1102]: Ignoring "noauto" for root device
	[  +0.067017] systemd-fstab-generator[1113]: Ignoring "noauto" for root device
	[  +0.058335] systemd-fstab-generator[1124]: Ignoring "noauto" for root device
	[  +0.081024] systemd-fstab-generator[1164]: Ignoring "noauto" for root device
	[  +2.521472] systemd-fstab-generator[1429]: Ignoring "noauto" for root device
	[  +5.130722] systemd-fstab-generator[2311]: Ignoring "noauto" for root device
	[  +3.434243] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [70259f8bbf90] <==
	* {"level":"info","ts":"2023-06-10T16:53:20.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-06-10T16:53:20.932Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-06-10T16:53:20.933Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-10T16:53:20.933Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T16:53:20.933Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T16:53:20.933Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-06-10T16:53:20.933Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T16:53:21.720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-06-10T16:53:21.721Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-179000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T16:53:21.721Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:53:21.721Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T16:53:21.722Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-06-10T16:53:21.722Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:53:21.722Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:53:21.722Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:53:21.722Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T16:53:21.725Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T16:53:21.725Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T16:53:21.726Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  16:53:28 up 0 min,  0 users,  load average: 1.25, 0.28, 0.09
	Linux image-179000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4ef6bf18854f] <==
	* I0610 16:53:22.399379       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 16:53:22.407622       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 16:53:22.407694       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:53:22.407775       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 16:53:22.407788       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:53:22.407694       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:53:22.408574       1 controller.go:624] quota admission added evaluator for: namespaces
	I0610 16:53:22.409115       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0610 16:53:22.409124       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0610 16:53:22.412311       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:53:22.432165       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 16:53:23.164506       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:53:23.311416       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 16:53:23.313806       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:53:23.313817       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 16:53:23.453226       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:53:23.466017       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 16:53:23.557800       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 16:53:23.559851       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0610 16:53:23.560169       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 16:53:23.561510       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:53:24.345458       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 16:53:24.966968       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 16:53:24.971207       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 16:53:24.975402       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [ed93cdeca918] <==
	* I0610 16:53:24.372867       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0610 16:53:24.372922       1 controllermanager.go:638] "Started controller" controller="nodelifecycle"
	I0610 16:53:24.373005       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0610 16:53:24.373047       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0610 16:53:24.373065       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0610 16:53:24.392447       1 controllermanager.go:638] "Started controller" controller="daemonset"
	I0610 16:53:24.392476       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0610 16:53:24.392480       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0610 16:53:24.441646       1 shared_informer.go:318] Caches are synced for tokens
	I0610 16:53:24.543710       1 controllermanager.go:638] "Started controller" controller="job"
	I0610 16:53:24.543757       1 job_controller.go:202] Starting job controller
	I0610 16:53:24.543764       1 shared_informer.go:311] Waiting for caches to sync for job
	I0610 16:53:24.692880       1 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
	I0610 16:53:24.692965       1 controller.go:169] "Starting ephemeral volume controller"
	I0610 16:53:24.692976       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0610 16:53:24.942726       1 controllermanager.go:638] "Started controller" controller="garbagecollector"
	I0610 16:53:24.942828       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0610 16:53:24.942839       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0610 16:53:24.942849       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0610 16:53:25.193151       1 controllermanager.go:638] "Started controller" controller="statefulset"
	I0610 16:53:25.193205       1 stateful_set.go:161] "Starting stateful set controller"
	I0610 16:53:25.193212       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0610 16:53:25.242649       1 controllermanager.go:638] "Started controller" controller="csrapproving"
	I0610 16:53:25.242682       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0610 16:53:25.242687       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	
	* 
	* ==> kube-scheduler [f6d75a3c7d44] <==
	* W0610 16:53:22.377172       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:53:22.377192       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 16:53:22.377221       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:53:22.377229       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 16:53:22.377247       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 16:53:22.377250       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 16:53:22.377280       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:53:22.377287       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 16:53:22.377326       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 16:53:22.377364       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 16:53:22.377393       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:53:22.377400       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 16:53:22.377436       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:53:22.377443       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 16:53:22.377475       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:53:22.377479       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 16:53:22.377507       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:53:22.377512       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 16:53:22.377327       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 16:53:22.377896       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 16:53:22.377955       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:53:22.377963       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 16:53:23.360927       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 16:53:23.360947       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0610 16:53:23.774116       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:53:07 UTC, ends at Sat 2023-06-10 16:53:28 UTC. --
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.128975    2329 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.129015    2329 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.129028    2329 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.129042    2329 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 16:53:25 image-179000 kubelet[2329]: E0610 16:53:25.135431    2329 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"etcd-image-179000\" already exists" pod="kube-system/etcd-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211051    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72b626a241d69c217beef3f77b562ab6-ca-certs\") pod \"kube-apiserver-image-179000\" (UID: \"72b626a241d69c217beef3f77b562ab6\") " pod="kube-system/kube-apiserver-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211069    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/678b5f2a4011091c2a03d9525c0ff8f6-flexvolume-dir\") pod \"kube-controller-manager-image-179000\" (UID: \"678b5f2a4011091c2a03d9525c0ff8f6\") " pod="kube-system/kube-controller-manager-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211080    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a25136fa3f3dc27af5e0331a6acab0c-kubeconfig\") pod \"kube-scheduler-image-179000\" (UID: \"0a25136fa3f3dc27af5e0331a6acab0c\") " pod="kube-system/kube-scheduler-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211297    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1d4f9c8ed8f60eeea6b92656ab5f7a84-etcd-data\") pod \"etcd-image-179000\" (UID: \"1d4f9c8ed8f60eeea6b92656ab5f7a84\") " pod="kube-system/etcd-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211314    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72b626a241d69c217beef3f77b562ab6-k8s-certs\") pod \"kube-apiserver-image-179000\" (UID: \"72b626a241d69c217beef3f77b562ab6\") " pod="kube-system/kube-apiserver-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211323    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72b626a241d69c217beef3f77b562ab6-usr-share-ca-certificates\") pod \"kube-apiserver-image-179000\" (UID: \"72b626a241d69c217beef3f77b562ab6\") " pod="kube-system/kube-apiserver-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211334    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/678b5f2a4011091c2a03d9525c0ff8f6-ca-certs\") pod \"kube-controller-manager-image-179000\" (UID: \"678b5f2a4011091c2a03d9525c0ff8f6\") " pod="kube-system/kube-controller-manager-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211342    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/678b5f2a4011091c2a03d9525c0ff8f6-k8s-certs\") pod \"kube-controller-manager-image-179000\" (UID: \"678b5f2a4011091c2a03d9525c0ff8f6\") " pod="kube-system/kube-controller-manager-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211351    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/678b5f2a4011091c2a03d9525c0ff8f6-kubeconfig\") pod \"kube-controller-manager-image-179000\" (UID: \"678b5f2a4011091c2a03d9525c0ff8f6\") " pod="kube-system/kube-controller-manager-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211363    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/678b5f2a4011091c2a03d9525c0ff8f6-usr-share-ca-certificates\") pod \"kube-controller-manager-image-179000\" (UID: \"678b5f2a4011091c2a03d9525c0ff8f6\") " pod="kube-system/kube-controller-manager-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.211371    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1d4f9c8ed8f60eeea6b92656ab5f7a84-etcd-certs\") pod \"etcd-image-179000\" (UID: \"1d4f9c8ed8f60eeea6b92656ab5f7a84\") " pod="kube-system/etcd-image-179000"
	Jun 10 16:53:25 image-179000 kubelet[2329]: I0610 16:53:25.997096    2329 apiserver.go:52] "Watching apiserver"
	Jun 10 16:53:26 image-179000 kubelet[2329]: I0610 16:53:26.009960    2329 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jun 10 16:53:26 image-179000 kubelet[2329]: I0610 16:53:26.015696    2329 reconciler.go:41] "Reconciler: start to sync state"
	Jun 10 16:53:26 image-179000 kubelet[2329]: E0610 16:53:26.063346    2329 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-179000\" already exists" pod="kube-system/kube-apiserver-image-179000"
	Jun 10 16:53:26 image-179000 kubelet[2329]: I0610 16:53:26.074711    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-179000" podStartSLOduration=1.074686842 podCreationTimestamp="2023-06-10 16:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:53:26.070367134 +0000 UTC m=+1.116948334" watchObservedRunningTime="2023-06-10 16:53:26.074686842 +0000 UTC m=+1.121268043"
	Jun 10 16:53:26 image-179000 kubelet[2329]: I0610 16:53:26.086108    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-179000" podStartSLOduration=1.086077717 podCreationTimestamp="2023-06-10 16:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:53:26.074769176 +0000 UTC m=+1.121350376" watchObservedRunningTime="2023-06-10 16:53:26.086077717 +0000 UTC m=+1.132658876"
	Jun 10 16:53:26 image-179000 kubelet[2329]: I0610 16:53:26.086248    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-179000" podStartSLOduration=2.086240634 podCreationTimestamp="2023-06-10 16:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:53:26.085913051 +0000 UTC m=+1.132494251" watchObservedRunningTime="2023-06-10 16:53:26.086240634 +0000 UTC m=+1.132821835"
	Jun 10 16:53:26 image-179000 kubelet[2329]: I0610 16:53:26.094155    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-179000" podStartSLOduration=1.094135259 podCreationTimestamp="2023-06-10 16:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 16:53:26.090290634 +0000 UTC m=+1.136871835" watchObservedRunningTime="2023-06-10 16:53:26.094135259 +0000 UTC m=+1.140716460"
	Jun 10 16:53:27 image-179000 kubelet[2329]: I0610 16:53:27.306662    2329 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-179000 -n image-179000
helpers_test.go:261: (dbg) Run:  kubectl --context image-179000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-179000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-179000 describe pod storage-provisioner: exit status 1 (38.513125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context image-179000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.02s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (54.96s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-659000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-659000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.079208s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-659000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-659000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [085de9bf-38fe-4ff6-8214-2be4fe48b5c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [085de9bf-38fe-4ff6-8214-2be4fe48b5c7] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.017105667s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-659000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.035248s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons disable ingress-dns --alsologtostderr -v=1: (10.510513917s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons disable ingress --alsologtostderr -v=1: (7.067919166s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-659000 -n ingress-addon-legacy-659000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-656000 image ls                               | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| image   | functional-656000 image load                             | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-656000 image ls                               | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| image   | functional-656000 image save --daemon                    | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-656000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-656000                                        | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-656000                                        | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-656000 ssh pgrep                              | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-656000                                        | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-656000 image build -t                         | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | localhost/my-image:functional-656000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-656000                                        | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-656000 image ls                               | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| delete  | -p functional-656000                                     | functional-656000           | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:52 PDT |
	| start   | -p image-179000 --driver=qemu2                           | image-179000                | jenkins | v1.30.1 | 10 Jun 23 09:52 PDT | 10 Jun 23 09:53 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-179000                | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-179000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-179000                | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-179000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-179000                | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-179000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-179000                | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-179000                                          |                             |         |         |                     |                     |
	| delete  | -p image-179000                                          | image-179000                | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:53 PDT |
	| start   | -p ingress-addon-legacy-659000                           | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:53 PDT | 10 Jun 23 09:54 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-659000                              | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:54 PDT | 10 Jun 23 09:55 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-659000                              | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:55 PDT | 10 Jun 23 09:55 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-659000                              | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:55 PDT | 10 Jun 23 09:55 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-659000 ip                           | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:55 PDT | 10 Jun 23 09:55 PDT |
	| addons  | ingress-addon-legacy-659000                              | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:55 PDT | 10 Jun 23 09:55 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-659000                              | ingress-addon-legacy-659000 | jenkins | v1.30.1 | 10 Jun 23 09:55 PDT | 10 Jun 23 09:56 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:53:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:53:29.521024    3202 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:53:29.521155    3202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:53:29.521158    3202 out.go:309] Setting ErrFile to fd 2...
	I0610 09:53:29.521161    3202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:53:29.521249    3202 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:53:29.522345    3202 out.go:303] Setting JSON to false
	I0610 09:53:29.538254    3202 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3180,"bootTime":1686412829,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:53:29.538337    3202 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:53:29.542425    3202 out.go:177] * [ingress-addon-legacy-659000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:53:29.549351    3202 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:53:29.549438    3202 notify.go:220] Checking for updates...
	I0610 09:53:29.553291    3202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:53:29.556259    3202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:53:29.559326    3202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:53:29.562327    3202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:53:29.565346    3202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:53:29.568439    3202 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:53:29.572328    3202 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 09:53:29.579323    3202 start.go:297] selected driver: qemu2
	I0610 09:53:29.579328    3202 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:53:29.579337    3202 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:53:29.581680    3202 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:53:29.584321    3202 out.go:177] * Automatically selected the socket_vmnet network
	I0610 09:53:29.587307    3202 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 09:53:29.587339    3202 cni.go:84] Creating CNI manager for ""
	I0610 09:53:29.587345    3202 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 09:53:29.587349    3202 start_flags.go:319] config:
	{Name:ingress-addon-legacy-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
:}
	I0610 09:53:29.587447    3202 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:53:29.594307    3202 out.go:177] * Starting control plane node ingress-addon-legacy-659000 in cluster ingress-addon-legacy-659000
	I0610 09:53:29.598322    3202 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 09:53:29.801353    3202 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0610 09:53:29.801449    3202 cache.go:57] Caching tarball of preloaded images
	I0610 09:53:29.802105    3202 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 09:53:29.807198    3202 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0610 09:53:29.815059    3202 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:53:30.027866    3202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0610 09:53:45.768061    3202 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:53:45.768207    3202 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:53:46.516509    3202 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
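The download step above fetches the preload tarball with an md5 digest embedded in the URL's checksum parameter and then verifies the file on disk before trusting it as a cache entry. A minimal sketch of that verification step, assuming the expected digest has already been pulled out of the ?checksum=md5: parameter (this is the general pattern, not minikube's actual download.go):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 re-reads the downloaded tarball and compares its MD5 digest
    // against the value advertised in the download URL.
    func verifyMD5(path, expected string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
        }
        return nil
    }

    func main() {
        // Path is illustrative; the digest is the one shown in the download URL above.
        fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4",
            "c8c260b886393123ce9d312d8ac2379e"))
    }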
	I0610 09:53:46.516688    3202 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/config.json ...
	I0610 09:53:46.516710    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/config.json: {Name:mka25cd706a1162759e791085f61d1a8bf7bb62d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:53:46.516922    3202 cache.go:195] Successfully downloaded all kic artifacts
	I0610 09:53:46.516935    3202 start.go:364] acquiring machines lock for ingress-addon-legacy-659000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 09:53:46.516964    3202 start.go:368] acquired machines lock for "ingress-addon-legacy-659000" in 25.667µs
	I0610 09:53:46.516992    3202 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:53:46.517033    3202 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 09:53:46.522024    3202 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0610 09:53:46.544790    3202 start.go:159] libmachine.API.Create for "ingress-addon-legacy-659000" (driver="qemu2")
	I0610 09:53:46.544825    3202 client.go:168] LocalClient.Create starting
	I0610 09:53:46.544959    3202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 09:53:46.544999    3202 main.go:141] libmachine: Decoding PEM data...
	I0610 09:53:46.545009    3202 main.go:141] libmachine: Parsing certificate...
	I0610 09:53:46.545056    3202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 09:53:46.545074    3202 main.go:141] libmachine: Decoding PEM data...
	I0610 09:53:46.545081    3202 main.go:141] libmachine: Parsing certificate...
	I0610 09:53:46.545407    3202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 09:53:46.877087    3202 main.go:141] libmachine: Creating SSH key...
	I0610 09:53:46.982413    3202 main.go:141] libmachine: Creating Disk image...
	I0610 09:53:46.982418    3202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 09:53:46.982565    3202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/disk.qcow2
	I0610 09:53:46.998855    3202 main.go:141] libmachine: STDOUT: 
	I0610 09:53:46.998867    3202 main.go:141] libmachine: STDERR: 
	I0610 09:53:46.998918    3202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/disk.qcow2 +20000M
	I0610 09:53:47.005944    3202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 09:53:47.005957    3202 main.go:141] libmachine: STDERR: 
	I0610 09:53:47.005973    3202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/disk.qcow2
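The two qemu-img invocations above create the VM's disk: the raw boot image is converted to qcow2 and then grown by the requested 20000 MB. A self-contained sketch of the same sequence using os/exec (paths are placeholders; the real ones live under the profile's machines directory):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk converts a raw seed image to qcow2 and resizes it, mirroring
    // the qemu-img commands logged above.
    func createDisk(raw, qcow2 string, extraMB int) error {
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            return fmt.Errorf("convert: %v: %s", err, out)
        }
        if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
            return fmt.Errorf("resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(createDisk("disk.qcow2.raw", "disk.qcow2", 20000))
    }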
	I0610 09:53:47.005978    3202 main.go:141] libmachine: Starting QEMU VM...
	I0610 09:53:47.006010    3202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:7b:35:34:1f:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/disk.qcow2
	I0610 09:53:47.051341    3202 main.go:141] libmachine: STDOUT: 
	I0610 09:53:47.051381    3202 main.go:141] libmachine: STDERR: 
	I0610 09:53:47.051386    3202 main.go:141] libmachine: Attempt 0
	I0610 09:53:47.051400    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:47.051464    3202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:47.051481    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:47.051491    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:47.051497    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:47.051502    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:49.053668    3202 main.go:141] libmachine: Attempt 1
	I0610 09:53:49.053739    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:49.054117    3202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:49.054173    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:49.054240    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:49.054274    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:49.054306    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:51.056452    3202 main.go:141] libmachine: Attempt 2
	I0610 09:53:51.056480    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:51.056604    3202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:51.056616    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:51.056633    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:51.056638    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:51.056643    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:53.058656    3202 main.go:141] libmachine: Attempt 3
	I0610 09:53:53.058665    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:53.058699    3202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:53.058706    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:53.058712    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:53.058717    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:53.058723    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:55.060777    3202 main.go:141] libmachine: Attempt 4
	I0610 09:53:55.060805    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:55.060845    3202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:55.060852    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:55.060861    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:55.060867    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:55.060873    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:57.062888    3202 main.go:141] libmachine: Attempt 5
	I0610 09:53:57.062906    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:57.062972    3202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 09:53:57.062986    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:46:a6:dd:cb:1f:6f ID:1,46:a6:dd:cb:1f:6f Lease:0x6485fbf3}
	I0610 09:53:57.062992    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:26:b9:c5:d1:d3:24 ID:1,26:b9:c5:d1:d3:24 Lease:0x6485fb34}
	I0610 09:53:57.062996    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ca:2d:ad:bf:4e:83 ID:1,ca:2d:ad:bf:4e:83 Lease:0x6484a9a7}
	I0610 09:53:57.063001    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:c2:e2:60:7a:4e:46 ID:1,c2:e2:60:7a:4e:46 Lease:0x6484a985}
	I0610 09:53:59.065034    3202 main.go:141] libmachine: Attempt 6
	I0610 09:53:59.065078    3202 main.go:141] libmachine: Searching for 46:7b:35:34:1f:17 in /var/db/dhcpd_leases ...
	I0610 09:53:59.065215    3202 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0610 09:53:59.065226    3202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:46:7b:35:34:1f:17 ID:1,46:7b:35:34:1f:17 Lease:0x6485fc25}
	I0610 09:53:59.065233    3202 main.go:141] libmachine: Found match: 46:7b:35:34:1f:17
	I0610 09:53:59.065243    3202 main.go:141] libmachine: IP: 192.168.105.6
	I0610 09:53:59.065249    3202 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
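Attempts 0 through 6 above are the qemu2 driver polling the host's DHCP lease database until an entry with the VM's MAC address appears, at which point it learns the guest IP and starts waiting for SSH. A rough sketch of that lookup; the entry layout (brace-delimited blocks with ip_address=/hw_address= lines) is an assumption about /var/db/dhcpd_leases, not the driver's actual parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the DHCP lease file for the lease whose hardware address
    // ends with mac and returns the associated IP, or "" if it is not there yet.
    func ipForMAC(leaseFile, mac string) (string, error) {
        f, err := os.Open(leaseFile)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
                return ip, nil // assumes hw_address follows ip_address within the same block
            case line == "}":
                ip = "" // entry ended without a match
            }
        }
        return "", sc.Err()
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "46:7b:35:34:1f:17")
        fmt.Println(ip, err)
    }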
	I0610 09:54:01.086425    3202 machine.go:88] provisioning docker machine ...
	I0610 09:54:01.086496    3202 buildroot.go:166] provisioning hostname "ingress-addon-legacy-659000"
	I0610 09:54:01.086746    3202 main.go:141] libmachine: Using SSH client type: native
	I0610 09:54:01.087722    3202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10112c6d0] 0x10112f130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 09:54:01.087748    3202 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-659000 && echo "ingress-addon-legacy-659000" | sudo tee /etc/hostname
	I0610 09:54:01.183996    3202 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-659000
	
	I0610 09:54:01.184139    3202 main.go:141] libmachine: Using SSH client type: native
	I0610 09:54:01.184639    3202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10112c6d0] 0x10112f130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 09:54:01.184661    3202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-659000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-659000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-659000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 09:54:01.259132    3202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 09:54:01.259150    3202 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16578-1150/.minikube CaCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16578-1150/.minikube}
	I0610 09:54:01.259162    3202 buildroot.go:174] setting up certificates
	I0610 09:54:01.259170    3202 provision.go:83] configureAuth start
	I0610 09:54:01.259175    3202 provision.go:138] copyHostCerts
	I0610 09:54:01.259228    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem
	I0610 09:54:01.259305    3202 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem, removing ...
	I0610 09:54:01.259312    3202 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem
	I0610 09:54:01.259533    3202 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.pem (1078 bytes)
	I0610 09:54:01.259811    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem
	I0610 09:54:01.259852    3202 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem, removing ...
	I0610 09:54:01.259856    3202 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem
	I0610 09:54:01.259923    3202 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/cert.pem (1123 bytes)
	I0610 09:54:01.260056    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem
	I0610 09:54:01.260097    3202 exec_runner.go:144] found /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem, removing ...
	I0610 09:54:01.260102    3202 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem
	I0610 09:54:01.260159    3202 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16578-1150/.minikube/key.pem (1679 bytes)
	I0610 09:54:01.260283    3202 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-659000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-659000]
	I0610 09:54:01.457802    3202 provision.go:172] copyRemoteCerts
	I0610 09:54:01.457856    3202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 09:54:01.457865    3202 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/id_rsa Username:docker}
	I0610 09:54:01.490785    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 09:54:01.490839    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 09:54:01.497684    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 09:54:01.497726    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0610 09:54:01.504216    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 09:54:01.504253    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 09:54:01.511176    3202 provision.go:86] duration metric: configureAuth took 252.005375ms
	I0610 09:54:01.511184    3202 buildroot.go:189] setting minikube options for container-runtime
	I0610 09:54:01.511286    3202 config.go:182] Loaded profile config "ingress-addon-legacy-659000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0610 09:54:01.511322    3202 main.go:141] libmachine: Using SSH client type: native
	I0610 09:54:01.511543    3202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10112c6d0] 0x10112f130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 09:54:01.511548    3202 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 09:54:01.568320    3202 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 09:54:01.568327    3202 buildroot.go:70] root file system type: tmpfs
	I0610 09:54:01.568393    3202 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 09:54:01.568440    3202 main.go:141] libmachine: Using SSH client type: native
	I0610 09:54:01.568687    3202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10112c6d0] 0x10112f130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 09:54:01.568723    3202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 09:54:01.633078    3202 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 09:54:01.633129    3202 main.go:141] libmachine: Using SSH client type: native
	I0610 09:54:01.633373    3202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10112c6d0] 0x10112f130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 09:54:01.633382    3202 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 09:54:02.010955    3202 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 09:54:02.010975    3202 machine.go:91] provisioned docker machine in 924.530708ms
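The docker.service update above is written to be idempotent: the rendered unit goes to docker.service.new, is diffed against the installed file, and is only moved into place (followed by daemon-reload, enable, and restart) when the two differ. A small sketch that composes the same one-liner shown in the log:

    package main

    import "fmt"

    // updateUnitCmd builds the "swap in the new unit only if it changed" pipeline
    // used above for the docker service.
    func updateUnitCmd(svc string) string {
        path := "/lib/systemd/system/" + svc + ".service"
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, svc)
    }

    func main() {
        // The resulting command is then run on the guest over SSH, as in the log.
        fmt.Println(updateUnitCmd("docker"))
    }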
	I0610 09:54:02.010985    3202 client.go:171] LocalClient.Create took 15.466385208s
	I0610 09:54:02.011003    3202 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-659000" took 15.466445791s
	I0610 09:54:02.011010    3202 start.go:300] post-start starting for "ingress-addon-legacy-659000" (driver="qemu2")
	I0610 09:54:02.011013    3202 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 09:54:02.011087    3202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 09:54:02.011100    3202 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/id_rsa Username:docker}
	I0610 09:54:02.042791    3202 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 09:54:02.044301    3202 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 09:54:02.044310    3202 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/addons for local assets ...
	I0610 09:54:02.044380    3202 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16578-1150/.minikube/files for local assets ...
	I0610 09:54:02.044487    3202 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem -> 15642.pem in /etc/ssl/certs
	I0610 09:54:02.044493    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem -> /etc/ssl/certs/15642.pem
	I0610 09:54:02.044596    3202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 09:54:02.047574    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem --> /etc/ssl/certs/15642.pem (1708 bytes)
	I0610 09:54:02.054943    3202 start.go:303] post-start completed in 43.928125ms
	I0610 09:54:02.055342    3202 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/config.json ...
	I0610 09:54:02.055506    3202 start.go:128] duration metric: createHost completed in 15.538700584s
	I0610 09:54:02.055531    3202 main.go:141] libmachine: Using SSH client type: native
	I0610 09:54:02.055755    3202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10112c6d0] 0x10112f130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 09:54:02.055759    3202 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 09:54:02.110003    3202 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686416041.703446210
	
	I0610 09:54:02.110009    3202 fix.go:207] guest clock: 1686416041.703446210
	I0610 09:54:02.110013    3202 fix.go:220] Guest: 2023-06-10 09:54:01.70344621 -0700 PDT Remote: 2023-06-10 09:54:02.05551 -0700 PDT m=+32.554127126 (delta=-352.06379ms)
	I0610 09:54:02.110027    3202 fix.go:191] guest clock delta is within tolerance: -352.06379ms
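The clock check above reads the guest's time with date +%s.%N over SSH, compares it to the host's clock, and leaves the guest alone because the roughly -352 ms drift is inside the allowed window. A small sketch of that comparison; the 2-second tolerance here is an illustrative assumption, not minikube's configured value:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // clockDelta reports guest minus host time and whether the drift stays
    // within the given tolerance.
    func clockDelta(guestUnix float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
        delta := guest.Sub(host)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Values taken from the log: guest 1686416041.703446210, host 09:54:02.05551 PDT.
        host := time.Date(2023, 6, 10, 9, 54, 2, 55510000, time.FixedZone("PDT", -7*3600))
        delta, ok := clockDelta(1686416041.703446210, host, 2*time.Second)
        fmt.Println(delta, ok) // roughly -352ms, within tolerance
    }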
	I0610 09:54:02.110030    3202 start.go:83] releasing machines lock for "ingress-addon-legacy-659000", held for 15.593286167s
	I0610 09:54:02.110283    3202 ssh_runner.go:195] Run: cat /version.json
	I0610 09:54:02.110293    3202 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/id_rsa Username:docker}
	I0610 09:54:02.110305    3202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 09:54:02.110321    3202 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/id_rsa Username:docker}
	I0610 09:54:02.182471    3202 ssh_runner.go:195] Run: systemctl --version
	I0610 09:54:02.184442    3202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 09:54:02.186273    3202 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 09:54:02.186303    3202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 09:54:02.189505    3202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 09:54:02.194538    3202 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 09:54:02.194544    3202 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 09:54:02.194613    3202 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:54:02.203844    3202 docker.go:633] Got preloaded images: 
	I0610 09:54:02.203855    3202 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0610 09:54:02.203921    3202 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:54:02.207478    3202 ssh_runner.go:195] Run: which lz4
	I0610 09:54:02.208835    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0610 09:54:02.208933    3202 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 09:54:02.210187    3202 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:54:02.210201    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0610 09:54:03.916111    3202 docker.go:597] Took 1.707248 seconds to copy over tarball
	I0610 09:54:03.916166    3202 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:54:05.218705    3202 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.302545542s)
	I0610 09:54:05.218719    3202 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:54:05.241460    3202 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:54:05.246943    3202 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0610 09:54:05.256446    3202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:54:05.335406    3202 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:54:06.889075    3202 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.553677583s)
	I0610 09:54:06.889097    3202 start.go:481] detecting cgroup driver to use...
	I0610 09:54:06.889170    3202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:54:06.894371    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0610 09:54:06.897207    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 09:54:06.900056    3202 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 09:54:06.900081    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 09:54:06.903426    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:54:06.906803    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 09:54:06.910002    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 09:54:06.912775    3202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 09:54:06.915828    3202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 09:54:06.919350    3202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 09:54:06.922580    3202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 09:54:06.925261    3202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:54:07.006445    3202 ssh_runner.go:195] Run: sudo systemctl restart containerd
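The block above rewrites containerd's config.toml in place with a series of sed expressions so that it uses the cgroupfs driver, the runc v2 runtime, and the pause:3.2 sandbox image, then reloads systemd and restarts containerd. A sketch of the SystemdCgroup rewrite as a plain string transformation, equivalent in spirit to the sed rule shown above:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs forces SystemdCgroup = false in a containerd config.toml,
    // matching the sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' rule above.
    func setCgroupfs(configToml string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "    SystemdCgroup = true\n"
        fmt.Print(setCgroupfs(in)) // prints "    SystemdCgroup = false"
    }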
	I0610 09:54:07.013839    3202 start.go:481] detecting cgroup driver to use...
	I0610 09:54:07.013914    3202 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 09:54:07.019283    3202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:54:07.024135    3202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 09:54:07.029790    3202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 09:54:07.033934    3202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:54:07.038150    3202 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 09:54:07.075223    3202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 09:54:07.080549    3202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 09:54:07.085844    3202 ssh_runner.go:195] Run: which cri-dockerd
	I0610 09:54:07.087142    3202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 09:54:07.090148    3202 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 09:54:07.095277    3202 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 09:54:07.178243    3202 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 09:54:07.245232    3202 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 09:54:07.245245    3202 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 09:54:07.250279    3202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:54:07.328663    3202 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:54:08.497802    3202 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.169140958s)
	I0610 09:54:08.497872    3202 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:54:08.505459    3202 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 09:54:08.519106    3202 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	I0610 09:54:08.519268    3202 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 09:54:08.520685    3202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:54:08.524368    3202 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 09:54:08.524415    3202 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:54:08.530220    3202 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0610 09:54:08.530226    3202 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
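Note that the preload clearly did land: the image list above shows the full v1.18.20 control plane, but under the legacy k8s.gcr.io names, while the check looks for the registry.k8s.io names this minikube build expects, so the exact-name match fails and the code falls back to loading images one by one from the local cache. A toy version of that membership test, using the names from the stdout block above:

    package main

    import "fmt"

    // hasImage reports whether want appears verbatim in the runtime's image list.
    func hasImage(images []string, want string) bool {
        for _, img := range images {
            if img == want {
                return true
            }
        }
        return false
    }

    func main() {
        preloaded := []string{
            "k8s.gcr.io/kube-proxy:v1.18.20",
            "k8s.gcr.io/kube-apiserver:v1.18.20",
            "k8s.gcr.io/kube-controller-manager:v1.18.20",
            "k8s.gcr.io/kube-scheduler:v1.18.20",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "k8s.gcr.io/pause:3.2",
            "k8s.gcr.io/coredns:1.6.7",
            "k8s.gcr.io/etcd:3.4.3-0",
        }
        fmt.Println(hasImage(preloaded, "registry.k8s.io/kube-apiserver:v1.18.20")) // false -> "wasn't preloaded"
    }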
	I0610 09:54:08.530269    3202 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:54:08.533204    3202 ssh_runner.go:195] Run: which lz4
	I0610 09:54:08.534394    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0610 09:54:08.534484    3202 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 09:54:08.535687    3202 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 09:54:08.535698    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0610 09:54:10.201194    3202 docker.go:597] Took 1.666779 seconds to copy over tarball
	I0610 09:54:10.201258    3202 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 09:54:11.504364    3202 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.30311225s)
	I0610 09:54:11.504376    3202 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 09:54:11.523731    3202 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 09:54:11.527252    3202 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0610 09:54:11.532167    3202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 09:54:11.609079    3202 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 09:54:13.191394    3202 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.582322416s)
	I0610 09:54:13.191475    3202 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 09:54:13.197304    3202 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0610 09:54:13.197312    3202 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0610 09:54:13.197315    3202 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 09:54:13.206910    3202 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:54:13.207003    3202 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0610 09:54:13.207645    3202 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 09:54:13.207713    3202 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 09:54:13.207763    3202 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 09:54:13.207946    3202 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 09:54:13.208051    3202 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 09:54:13.208881    3202 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0610 09:54:13.217196    3202 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 09:54:13.218297    3202 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0610 09:54:13.218423    3202 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 09:54:13.218455    3202 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 09:54:13.218571    3202 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:54:13.218679    3202 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 09:54:13.218745    3202 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0610 09:54:13.219044    3202 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 09:54:14.529460    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0610 09:54:14.535958    3202 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0610 09:54:14.535984    3202 docker.go:313] Removing image: registry.k8s.io/pause:3.2
	I0610 09:54:14.536021    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	W0610 09:54:14.536723    3202 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:14.536818    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0610 09:54:14.542397    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0610 09:54:14.544302    3202 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0610 09:54:14.544324    3202 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 09:54:14.544365    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0610 09:54:14.550386    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0610 09:54:14.551553    3202 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:14.551644    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0610 09:54:14.558126    3202 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0610 09:54:14.558152    3202 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.7
	I0610 09:54:14.558198    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0610 09:54:14.564029    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0610 09:54:14.671635    3202 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:14.671751    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0610 09:54:14.677641    3202 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0610 09:54:14.677668    3202 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 09:54:14.677716    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0610 09:54:14.689574    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0610 09:54:14.877961    3202 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:14.878135    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0610 09:54:14.883711    3202 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0610 09:54:14.883730    3202 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 09:54:14.883774    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0610 09:54:14.888972    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0610 09:54:15.098000    3202 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:15.098339    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0610 09:54:15.115214    3202 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0610 09:54:15.115260    3202 docker.go:313] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0610 09:54:15.115362    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0610 09:54:15.126914    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0610 09:54:15.238378    3202 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:15.238868    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:54:15.262208    3202 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 09:54:15.262279    3202 docker.go:313] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:54:15.262441    3202 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:54:15.288665    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0610 09:54:15.320243    3202 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 09:54:15.320479    3202 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 09:54:15.333178    3202 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0610 09:54:15.333226    3202 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 09:54:15.333306    3202 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 09:54:15.343072    3202 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0610 09:54:15.343132    3202 cache_images.go:92] LoadImages completed in 2.14584175s
	W0610 09:54:15.343207    3202 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
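Each "needs transfer" decision above follows the same per-image pattern: inspect the image ID in the guest's Docker, remove the wrong-architecture (amd64) copy, and reload the arm64 variant from minikube's per-arch cache directory. A compressed sketch of that loop for one image; the docker load step is an assumption about how the cached file is consumed, and in this run the cache file was missing, which is why the warning above appears:

    img=registry.k8s.io/pause:3.2
    cache=/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2

    # if Docker already has the image but under the wrong ID/arch, drop it
    docker image inspect --format '{{.Id}}' "$img" && docker rmi "$img"

    # reload the architecture-correct copy from the cache, if the cache file exists
    [ -f "$cache" ] && docker load -i "$cache"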
	I0610 09:54:15.343283    3202 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 09:54:15.356023    3202 cni.go:84] Creating CNI manager for ""
	I0610 09:54:15.356035    3202 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 09:54:15.356052    3202 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 09:54:15.356071    3202 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-659000 NodeName:ingress-addon-legacy-659000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 09:54:15.356189    3202 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-659000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 09:54:15.356242    3202 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-659000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 09:54:15.356317    3202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0610 09:54:15.360349    3202 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 09:54:15.360391    3202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 09:54:15.364065    3202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0610 09:54:15.370242    3202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0610 09:54:15.375667    3202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
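At this point the rendered kubeadm config, the kubelet unit, and its drop-in have all been written into the guest. A minimal sketch of how that config is then exercised, matching the kubeadm invocation later in this log; the --dry-run variant is an assumption, useful only if you want to sanity-check the file without mutating the node:

    # the config written above is staged as kubeadm.yaml.new and copied into place before init
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml

    # validate without touching the node (assumption: a dry run suffices for a sanity check)
    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run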
	I0610 09:54:15.381329    3202 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0610 09:54:15.382619    3202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 09:54:15.386585    3202 certs.go:56] Setting up /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000 for IP: 192.168.105.6
	I0610 09:54:15.386595    3202 certs.go:190] acquiring lock for shared ca certs: {Name:mk0fe201bc13e6f12e399f6d97e7f5aaea92ff32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:15.386926    3202 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key
	I0610 09:54:15.387071    3202 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key
	I0610 09:54:15.387096    3202 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.key
	I0610 09:54:15.387102    3202 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt with IP's: []
	I0610 09:54:15.532173    3202 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt ...
	I0610 09:54:15.532179    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: {Name:mkccede93e6296ef8c47def9f82fa827b8b03ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:15.532419    3202 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.key ...
	I0610 09:54:15.532426    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.key: {Name:mk66c4df77730bc7e7e705549765731a51700c8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:15.532556    3202 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key.b354f644
	I0610 09:54:15.532564    3202 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 09:54:15.646912    3202 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt.b354f644 ...
	I0610 09:54:15.646916    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt.b354f644: {Name:mkd4b65af3500c24e11b1950ec65a426135eabb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:15.647058    3202 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key.b354f644 ...
	I0610 09:54:15.647061    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key.b354f644: {Name:mk3e4e1f484f5ad73c868efc1de55d706aff70df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:15.647177    3202 certs.go:337] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt
	I0610 09:54:15.647376    3202 certs.go:341] copying /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key
	I0610 09:54:15.647484    3202 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.key
	I0610 09:54:15.647494    3202 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.crt with IP's: []
	I0610 09:54:15.711144    3202 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.crt ...
	I0610 09:54:15.711147    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.crt: {Name:mk2dc51b4ddab2f2862d2d19946a3313b18f8b71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:15.711265    3202 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.key ...
	I0610 09:54:15.711268    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.key: {Name:mk902a164e6c47fb36c6abf7936d10b4dfb655af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
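The client, apiserver, and proxy-client certificates above are generated in-process by minikube's Go code rather than by a CLI. A rough openssl equivalent for the minikube-user client certificate, signed against the existing minikubeCA; the subject fields here are assumptions for illustration, not values taken from this log:

    # sign a client cert against the existing CA (ca.crt/ca.key live under ~/.minikube)
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt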
	I0610 09:54:15.711379    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 09:54:15.711397    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 09:54:15.711412    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 09:54:15.711427    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 09:54:15.711440    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 09:54:15.711455    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 09:54:15.711469    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 09:54:15.711480    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 09:54:15.711553    3202 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564.pem (1338 bytes)
	W0610 09:54:15.711924    3202 certs.go:433] ignoring /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564_empty.pem, impossibly tiny 0 bytes
	I0610 09:54:15.711932    3202 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 09:54:15.711961    3202 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem (1078 bytes)
	I0610 09:54:15.711981    3202 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem (1123 bytes)
	I0610 09:54:15.712006    3202 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/certs/key.pem (1679 bytes)
	I0610 09:54:15.712057    3202 certs.go:437] found cert: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem (1708 bytes)
	I0610 09:54:15.712080    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem -> /usr/share/ca-certificates/15642.pem
	I0610 09:54:15.712092    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:54:15.712102    3202 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564.pem -> /usr/share/ca-certificates/1564.pem
	I0610 09:54:15.712501    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 09:54:15.719964    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 09:54:15.727295    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 09:54:15.734693    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 09:54:15.741776    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 09:54:15.748601    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 09:54:15.755399    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 09:54:15.762646    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 09:54:15.769601    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/ssl/certs/15642.pem --> /usr/share/ca-certificates/15642.pem (1708 bytes)
	I0610 09:54:15.776137    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 09:54:15.783246    3202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/1564.pem --> /usr/share/ca-certificates/1564.pem (1338 bytes)
	I0610 09:54:15.790077    3202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 09:54:15.794996    3202 ssh_runner.go:195] Run: openssl version
	I0610 09:54:15.796994    3202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 09:54:15.800019    3202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:54:15.801387    3202 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:54:15.801413    3202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 09:54:15.803068    3202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 09:54:15.806248    3202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1564.pem && ln -fs /usr/share/ca-certificates/1564.pem /etc/ssl/certs/1564.pem"
	I0610 09:54:15.809059    3202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1564.pem
	I0610 09:54:15.810456    3202 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 16:49 /usr/share/ca-certificates/1564.pem
	I0610 09:54:15.810477    3202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1564.pem
	I0610 09:54:15.812221    3202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1564.pem /etc/ssl/certs/51391683.0"
	I0610 09:54:15.815571    3202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15642.pem && ln -fs /usr/share/ca-certificates/15642.pem /etc/ssl/certs/15642.pem"
	I0610 09:54:15.818887    3202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15642.pem
	I0610 09:54:15.820388    3202 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 16:49 /usr/share/ca-certificates/15642.pem
	I0610 09:54:15.820405    3202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15642.pem
	I0610 09:54:15.822199    3202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15642.pem /etc/ssl/certs/3ec20f2e.0"
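The symlink names chosen above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is how the trust store lookup works. The same link can be reproduced by hand once the PEM file is in place under /usr/share/ca-certificates:

    # the /etc/ssl/certs entry is named after the cert's subject hash, with a .0 suffix
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"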
	I0610 09:54:15.825032    3202 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 09:54:15.826210    3202 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 09:54:15.826237    3202 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.18.20 ClusterName:ingress-addon-legacy-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:54:15.826302    3202 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 09:54:15.831689    3202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 09:54:15.835234    3202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 09:54:15.838429    3202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 09:54:15.841037    3202 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 09:54:15.841056    3202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0610 09:54:15.867130    3202 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0610 09:54:15.867231    3202 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 09:54:15.949176    3202 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 09:54:15.949268    3202 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 09:54:15.949318    3202 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 09:54:15.995256    3202 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 09:54:15.995760    3202 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 09:54:15.995823    3202 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 09:54:16.085085    3202 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 09:54:16.092333    3202 out.go:204]   - Generating certificates and keys ...
	I0610 09:54:16.092372    3202 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 09:54:16.092401    3202 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 09:54:16.191139    3202 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 09:54:16.244973    3202 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 09:54:16.466880    3202 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 09:54:16.576584    3202 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 09:54:16.744698    3202 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 09:54:16.744766    3202 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-659000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0610 09:54:16.806259    3202 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 09:54:16.806336    3202 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-659000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0610 09:54:16.896559    3202 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 09:54:17.013711    3202 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 09:54:17.114240    3202 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 09:54:17.114274    3202 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 09:54:17.244508    3202 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 09:54:17.293708    3202 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 09:54:17.336035    3202 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 09:54:17.388846    3202 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 09:54:17.389128    3202 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 09:54:17.393222    3202 out.go:204]   - Booting up control plane ...
	I0610 09:54:17.393290    3202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 09:54:17.393332    3202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 09:54:17.393371    3202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 09:54:17.393412    3202 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 09:54:17.403963    3202 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 09:54:28.906541    3202 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502020 seconds
	I0610 09:54:28.906728    3202 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 09:54:28.917106    3202 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 09:54:29.445947    3202 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 09:54:29.446159    3202 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-659000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0610 09:54:29.951915    3202 kubeadm.go:322] [bootstrap-token] Using token: 68jcvq.4glvteqt32opsmfh
	I0610 09:54:29.958321    3202 out.go:204]   - Configuring RBAC rules ...
	I0610 09:54:29.958417    3202 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 09:54:29.958482    3202 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 09:54:29.963476    3202 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 09:54:29.964449    3202 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 09:54:29.965312    3202 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 09:54:29.966243    3202 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 09:54:29.970914    3202 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 09:54:30.157363    3202 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 09:54:30.362522    3202 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 09:54:30.363085    3202 kubeadm.go:322] 
	I0610 09:54:30.363131    3202 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 09:54:30.363136    3202 kubeadm.go:322] 
	I0610 09:54:30.363200    3202 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 09:54:30.363208    3202 kubeadm.go:322] 
	I0610 09:54:30.363225    3202 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 09:54:30.363278    3202 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 09:54:30.363317    3202 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 09:54:30.363329    3202 kubeadm.go:322] 
	I0610 09:54:30.363383    3202 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 09:54:30.363457    3202 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 09:54:30.363506    3202 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 09:54:30.363512    3202 kubeadm.go:322] 
	I0610 09:54:30.363579    3202 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 09:54:30.363637    3202 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 09:54:30.363642    3202 kubeadm.go:322] 
	I0610 09:54:30.363705    3202 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 68jcvq.4glvteqt32opsmfh \
	I0610 09:54:30.363778    3202 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 \
	I0610 09:54:30.363799    3202 kubeadm.go:322]     --control-plane 
	I0610 09:54:30.363825    3202 kubeadm.go:322] 
	I0610 09:54:30.363900    3202 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 09:54:30.363907    3202 kubeadm.go:322] 
	I0610 09:54:30.363966    3202 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 68jcvq.4glvteqt32opsmfh \
	I0610 09:54:30.364051    3202 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c155ccab511590e7868af4e2534dd51060b6f1b14354fb768975a4171970b2f2 
	I0610 09:54:30.364347    3202 kubeadm.go:322] W0610 16:54:15.460625    1609 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0610 09:54:30.364493    3202 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0610 09:54:30.364590    3202 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0610 09:54:30.364676    3202 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 09:54:30.364767    3202 kubeadm.go:322] W0610 16:54:16.985737    1609 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 09:54:30.364857    3202 kubeadm.go:322] W0610 16:54:16.986151    1609 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 09:54:30.364865    3202 cni.go:84] Creating CNI manager for ""
	I0610 09:54:30.364874    3202 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 09:54:30.364889    3202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 09:54:30.364970    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:30.364971    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5 minikube.k8s.io/name=ingress-addon-legacy-659000 minikube.k8s.io/updated_at=2023_06_10T09_54_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:30.370464    3202 ops.go:34] apiserver oom_adj: -16
	I0610 09:54:30.435828    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:30.971889    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:31.471873    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:31.971653    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:32.471894    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:32.971617    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:33.471767    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:33.971858    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:34.470595    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:34.971880    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:35.471538    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:35.971799    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:36.471767    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:36.970452    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:37.471702    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:37.971805    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:38.471535    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:38.971773    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:39.471729    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:39.971752    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:40.471652    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:40.971720    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:41.471704    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:41.971708    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:42.471686    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:42.971640    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:43.471641    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:43.971682    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:44.471663    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:44.971508    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:45.471545    3202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 09:54:45.551463    3202 kubeadm.go:1076] duration metric: took 15.186786583s to wait for elevateKubeSystemPrivileges.
	I0610 09:54:45.551477    3202 kubeadm.go:406] StartCluster complete in 29.725682459s
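The long run of identical "kubectl get sa default" lines above is minikube polling until the default ServiceAccount exists, which is its signal that kube-system privileges can be elevated; it took 15.2s in this run. As a loop, run inside the guest, it is simply:

    # poll roughly twice a second until the "default" ServiceAccount appears
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done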
	I0610 09:54:45.551486    3202 settings.go:142] acquiring lock: {Name:mk6eef4f6d8f32005bb3baac4caf84efe88ae2c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:45.551577    3202 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:54:45.552101    3202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/kubeconfig: {Name:mk43e1f9099026f94c69e1d46254f04b709c9ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:54:45.552270    3202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 09:54:45.552328    3202 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 09:54:45.552359    3202 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-659000"
	I0610 09:54:45.552363    3202 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-659000"
	I0610 09:54:45.552369    3202 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-659000"
	I0610 09:54:45.552370    3202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-659000"
	I0610 09:54:45.552395    3202 host.go:66] Checking if "ingress-addon-legacy-659000" exists ...
	I0610 09:54:45.552478    3202 config.go:182] Loaded profile config "ingress-addon-legacy-659000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0610 09:54:45.552570    3202 kapi.go:59] client config for ingress-addon-legacy-659000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102183510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:54:45.552957    3202 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 09:54:45.553297    3202 kapi.go:59] client config for ingress-addon-legacy-659000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102183510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:54:45.557777    3202 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 09:54:45.559031    3202 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-659000"
	I0610 09:54:45.560816    3202 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:54:45.560823    3202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 09:54:45.560832    3202 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/id_rsa Username:docker}
	I0610 09:54:45.560838    3202 host.go:66] Checking if "ingress-addon-legacy-659000" exists ...
	I0610 09:54:45.561510    3202 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 09:54:45.561514    3202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 09:54:45.561517    3202 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/ingress-addon-legacy-659000/id_rsa Username:docker}
	I0610 09:54:45.609099    3202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 09:54:45.615394    3202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 09:54:45.667590    3202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 09:54:45.790558    3202 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 09:54:45.864542    3202 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 09:54:45.872532    3202 addons.go:499] enable addons completed in 320.203375ms: enabled=[default-storageclass storage-provisioner]
	I0610 09:54:46.071227    3202 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-659000" context rescaled to 1 replicas
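The rescale above trims the CoreDNS deployment from kubeadm's default of two replicas down to one for this single-node profile. minikube does this through the Go API client, but the kubectl equivalent of the same change would be:

    # scale the coredns deployment in kube-system down to a single replica
    kubectl -n kube-system scale deployment coredns --replicas=1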
	I0610 09:54:46.071248    3202 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 09:54:46.074649    3202 out.go:177] * Verifying Kubernetes components...
	I0610 09:54:46.082558    3202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:54:46.088425    3202 kapi.go:59] client config for ingress-addon-legacy-659000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.key", CAFile:"/Users/jenkins/minikube-integration/16578-1150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102183510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 09:54:46.088561    3202 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-659000" to be "Ready" ...
	I0610 09:54:46.090254    3202 node_ready.go:49] node "ingress-addon-legacy-659000" has status "Ready":"True"
	I0610 09:54:46.090259    3202 node_ready.go:38] duration metric: took 1.691291ms waiting for node "ingress-addon-legacy-659000" to be "Ready" ...
	I0610 09:54:46.090263    3202 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:54:46.094439    3202 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-2x26t" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:48.111833    3202 pod_ready.go:102] pod "coredns-66bff467f8-2x26t" in "kube-system" namespace has status "Ready":"False"
	I0610 09:54:50.607121    3202 pod_ready.go:102] pod "coredns-66bff467f8-2x26t" in "kube-system" namespace has status "Ready":"False"
	I0610 09:54:52.608386    3202 pod_ready.go:102] pod "coredns-66bff467f8-2x26t" in "kube-system" namespace has status "Ready":"False"
	I0610 09:54:54.114584    3202 pod_ready.go:92] pod "coredns-66bff467f8-2x26t" in "kube-system" namespace has status "Ready":"True"
	I0610 09:54:54.114623    3202 pod_ready.go:81] duration metric: took 8.020292s waiting for pod "coredns-66bff467f8-2x26t" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.114641    3202 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.126748    3202 pod_ready.go:92] pod "etcd-ingress-addon-legacy-659000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:54:54.126768    3202 pod_ready.go:81] duration metric: took 12.117833ms waiting for pod "etcd-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.126782    3202 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.132428    3202 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-659000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:54:54.132444    3202 pod_ready.go:81] duration metric: took 5.654417ms waiting for pod "kube-apiserver-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.132454    3202 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.137205    3202 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-659000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:54:54.137216    3202 pod_ready.go:81] duration metric: took 4.753541ms waiting for pod "kube-controller-manager-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.137228    3202 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h76br" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.141838    3202 pod_ready.go:92] pod "kube-proxy-h76br" in "kube-system" namespace has status "Ready":"True"
	I0610 09:54:54.141850    3202 pod_ready.go:81] duration metric: took 4.61525ms waiting for pod "kube-proxy-h76br" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.141857    3202 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.299688    3202 request.go:628] Waited for 157.74075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-659000
	I0610 09:54:54.499668    3202 request.go:628] Waited for 192.828708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-659000
	I0610 09:54:54.506770    3202 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-659000" in "kube-system" namespace has status "Ready":"True"
	I0610 09:54:54.506804    3202 pod_ready.go:81] duration metric: took 364.941667ms waiting for pod "kube-scheduler-ingress-addon-legacy-659000" in "kube-system" namespace to be "Ready" ...
	I0610 09:54:54.506837    3202 pod_ready.go:38] duration metric: took 8.416690417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 09:54:54.506887    3202 api_server.go:52] waiting for apiserver process to appear ...
	I0610 09:54:54.507215    3202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 09:54:54.525903    3202 api_server.go:72] duration metric: took 8.4547415s to wait for apiserver process to appear ...
	I0610 09:54:54.525930    3202 api_server.go:88] waiting for apiserver healthz status ...
	I0610 09:54:54.525953    3202 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0610 09:54:54.535502    3202 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0610 09:54:54.536521    3202 api_server.go:141] control plane version: v1.18.20
	I0610 09:54:54.536540    3202 api_server.go:131] duration metric: took 10.599042ms to wait for apiserver health ...
	I0610 09:54:54.536548    3202 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 09:54:54.699704    3202 request.go:628] Waited for 163.037542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0610 09:54:54.713036    3202 system_pods.go:59] 7 kube-system pods found
	I0610 09:54:54.713086    3202 system_pods.go:61] "coredns-66bff467f8-2x26t" [da7772c9-c4f9-4673-af87-3d354cd9c8d0] Running
	I0610 09:54:54.713097    3202 system_pods.go:61] "etcd-ingress-addon-legacy-659000" [c987401c-b242-4a6b-b78e-2d9681a10c9e] Running
	I0610 09:54:54.713110    3202 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-659000" [1b6fd0d2-0560-4a40-8700-db66e70e89f3] Running
	I0610 09:54:54.713123    3202 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-659000" [54bc1a49-0e49-4c35-8348-706b7c4cb5f3] Running
	I0610 09:54:54.713135    3202 system_pods.go:61] "kube-proxy-h76br" [5e3b53c5-0e7d-4a7f-8f31-364ed37af843] Running
	I0610 09:54:54.713145    3202 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-659000" [77a2c6f8-9aef-4670-b268-ea5cf30ada3a] Running
	I0610 09:54:54.713155    3202 system_pods.go:61] "storage-provisioner" [fe5f8624-a8a2-46fb-89f0-e63e0c782229] Running
	I0610 09:54:54.713170    3202 system_pods.go:74] duration metric: took 176.616583ms to wait for pod list to return data ...
	I0610 09:54:54.713185    3202 default_sa.go:34] waiting for default service account to be created ...
	I0610 09:54:54.899641    3202 request.go:628] Waited for 186.306084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0610 09:54:54.905575    3202 default_sa.go:45] found service account: "default"
	I0610 09:54:54.905609    3202 default_sa.go:55] duration metric: took 192.413833ms for default service account to be created ...
	I0610 09:54:54.905629    3202 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 09:54:55.098828    3202 request.go:628] Waited for 193.05675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0610 09:54:55.108934    3202 system_pods.go:86] 7 kube-system pods found
	I0610 09:54:55.108971    3202 system_pods.go:89] "coredns-66bff467f8-2x26t" [da7772c9-c4f9-4673-af87-3d354cd9c8d0] Running
	I0610 09:54:55.108979    3202 system_pods.go:89] "etcd-ingress-addon-legacy-659000" [c987401c-b242-4a6b-b78e-2d9681a10c9e] Running
	I0610 09:54:55.108987    3202 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-659000" [1b6fd0d2-0560-4a40-8700-db66e70e89f3] Running
	I0610 09:54:55.108993    3202 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-659000" [54bc1a49-0e49-4c35-8348-706b7c4cb5f3] Running
	I0610 09:54:55.109000    3202 system_pods.go:89] "kube-proxy-h76br" [5e3b53c5-0e7d-4a7f-8f31-364ed37af843] Running
	I0610 09:54:55.109007    3202 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-659000" [77a2c6f8-9aef-4670-b268-ea5cf30ada3a] Running
	I0610 09:54:55.109015    3202 system_pods.go:89] "storage-provisioner" [fe5f8624-a8a2-46fb-89f0-e63e0c782229] Running
	I0610 09:54:55.109026    3202 system_pods.go:126] duration metric: took 203.391417ms to wait for k8s-apps to be running ...
	I0610 09:54:55.109041    3202 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 09:54:55.109314    3202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 09:54:55.123246    3202 system_svc.go:56] duration metric: took 14.203042ms WaitForService to wait for kubelet.
	I0610 09:54:55.123275    3202 kubeadm.go:581] duration metric: took 9.052122625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 09:54:55.123301    3202 node_conditions.go:102] verifying NodePressure condition ...
	I0610 09:54:55.299668    3202 request.go:628] Waited for 176.255542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0610 09:54:55.308326    3202 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 09:54:55.308383    3202 node_conditions.go:123] node cpu capacity is 2
	I0610 09:54:55.308406    3202 node_conditions.go:105] duration metric: took 185.100625ms to run NodePressure ...
	I0610 09:54:55.308423    3202 start.go:228] waiting for startup goroutines ...
	I0610 09:54:55.308439    3202 start.go:233] waiting for cluster config update ...
	I0610 09:54:55.308460    3202 start.go:242] writing updated cluster config ...
	I0610 09:54:55.309764    3202 ssh_runner.go:195] Run: rm -f paused
	I0610 09:54:55.451888    3202 start.go:573] kubectl: 1.25.9, cluster: 1.18.20 (minor skew: 7)
	I0610 09:54:55.454647    3202 out.go:177] 
	W0610 09:54:55.458767    3202 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.18.20.
	I0610 09:54:55.462739    3202 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0610 09:54:55.470664    3202 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-659000" cluster and "default" namespace by default
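	The run above verifies the control plane by polling https://192.168.105.6:8443/healthz until it answers 200 with body "ok" (api_server.go:253). Below is a minimal standalone sketch of that probe, assuming the node IP shown in this log and a CA certificate at the default ~/.minikube/ca.crt location; both values are illustrative, not part of the test output.

	// healthz_probe.go: sketch of the apiserver health probe seen in the log
	// (GET https://<node-ip>:8443/healthz, expect HTTP 200 with body "ok").
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		const url = "https://192.168.105.6:8443/healthz" // node IP from the log above
		caPath := os.ExpandEnv("$HOME/.minikube/ca.crt") // assumed default location

		// Trust only the minikube cluster CA.
		pool := x509.NewCertPool()
		if ca, err := os.ReadFile(caPath); err == nil {
			pool.AppendCertsFromPEM(ca)
		}
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}

		// Poll until the endpoint answers 200/"ok" or the deadline passes,
		// mirroring the retry behaviour of the health wait in the log.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Printf("%s returned %d: ok\n", url, resp.StatusCode)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "apiserver never became healthy")
		os.Exit(1)
	}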
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 16:53:57 UTC, ends at Sat 2023-06-10 16:56:04 UTC. --
	Jun 10 16:55:34 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:34.302405709Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:55:46 ingress-addon-legacy-659000 dockerd[1267]: time="2023-06-10T16:55:46.661185865Z" level=info msg="ignoring event" container=2d3aff7f3c74cbcbcdb906149469f7e23867dbb9f35a9b03af5f76d0116e70cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:55:46 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:46.661627155Z" level=info msg="shim disconnected" id=2d3aff7f3c74cbcbcdb906149469f7e23867dbb9f35a9b03af5f76d0116e70cb namespace=moby
	Jun 10 16:55:46 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:46.661712738Z" level=warning msg="cleaning up after shim disconnected" id=2d3aff7f3c74cbcbcdb906149469f7e23867dbb9f35a9b03af5f76d0116e70cb namespace=moby
	Jun 10 16:55:46 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:46.661721155Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.679056525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.679127816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.679143566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.679154483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1267]: time="2023-06-10T16:55:51.726331323Z" level=info msg="ignoring event" container=5ec2423683c9836af3b58fbc3f51e37b3dc4b0c5b60d7871ba4dfb54e60edc4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.726431781Z" level=info msg="shim disconnected" id=5ec2423683c9836af3b58fbc3f51e37b3dc4b0c5b60d7871ba4dfb54e60edc4d namespace=moby
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.726457698Z" level=warning msg="cleaning up after shim disconnected" id=5ec2423683c9836af3b58fbc3f51e37b3dc4b0c5b60d7871ba4dfb54e60edc4d namespace=moby
	Jun 10 16:55:51 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:51.726462198Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1267]: time="2023-06-10T16:55:59.088393504Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=5c79b59a566c46859174b96a577f2b5109cf04d5b57ad8191962c8accb7ff423
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1267]: time="2023-06-10T16:55:59.093540577Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=5c79b59a566c46859174b96a577f2b5109cf04d5b57ad8191962c8accb7ff423
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1267]: time="2023-06-10T16:55:59.173276583Z" level=info msg="ignoring event" container=5c79b59a566c46859174b96a577f2b5109cf04d5b57ad8191962c8accb7ff423 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.173442374Z" level=info msg="shim disconnected" id=5c79b59a566c46859174b96a577f2b5109cf04d5b57ad8191962c8accb7ff423 namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.173518333Z" level=warning msg="cleaning up after shim disconnected" id=5c79b59a566c46859174b96a577f2b5109cf04d5b57ad8191962c8accb7ff423 namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.173529999Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.188023137Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:55:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1267]: time="2023-06-10T16:55:59.219030407Z" level=info msg="ignoring event" container=46f2784546e8ed9d71e6f0ab99690547791bd547e216cca18382fa39c5cce69d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.219208532Z" level=info msg="shim disconnected" id=46f2784546e8ed9d71e6f0ab99690547791bd547e216cca18382fa39c5cce69d namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.219245949Z" level=warning msg="cleaning up after shim disconnected" id=46f2784546e8ed9d71e6f0ab99690547791bd547e216cca18382fa39c5cce69d namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.219250699Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 16:55:59 ingress-addon-legacy-659000 dockerd[1275]: time="2023-06-10T16:55:59.223966397Z" level=warning msg="cleanup warnings time=\"2023-06-10T16:55:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	5ec2423683c98       13753a81eccfd                                                                                                      13 seconds ago       Exited              hello-world-app           2                   bece88fe995a3
	ce21c4b592a22       nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90                                      41 seconds ago       Running             nginx                     0                   f033798982a5d
	5c79b59a566c4       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   46f2784546e8e
	abf0528b4bfaa       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   54928d0cf5695
	8a878c6a84a31       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   648df9d06e682
	272cb6f9abaa4       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   c9320b7209eac
	c80e0f5b7387d       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   ecbbff62cba6c
	7eee54c94ef78       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   c3210de8298b3
	2f7068c1943e9       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   76fa6a4e2a766
	641c636cfea91       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   e1e3baee108af
	4e15caa6d6081       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   73f392e0a7d96
	f60c03150b914       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   18eca3cda4278
	
	* 
	* ==> coredns [c80e0f5b7387] <==
	* [INFO] 172.17.0.1:12587 - 21382 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041125s
	[INFO] 172.17.0.1:10074 - 15333 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000021s
	[INFO] 172.17.0.1:12587 - 23790 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043s
	[INFO] 172.17.0.1:12587 - 48801 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055708s
	[INFO] 172.17.0.1:10074 - 1248 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014333s
	[INFO] 172.17.0.1:10074 - 26004 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012083s
	[INFO] 172.17.0.1:12587 - 26908 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052082s
	[INFO] 172.17.0.1:10074 - 59668 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012458s
	[INFO] 172.17.0.1:10074 - 49661 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014375s
	[INFO] 172.17.0.1:10074 - 15503 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013s
	[INFO] 172.17.0.1:10074 - 60078 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015499s
	[INFO] 172.17.0.1:53530 - 11731 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032958s
	[INFO] 172.17.0.1:8443 - 38693 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043916s
	[INFO] 172.17.0.1:53530 - 31552 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036333s
	[INFO] 172.17.0.1:53530 - 42959 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014875s
	[INFO] 172.17.0.1:8443 - 58262 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016s
	[INFO] 172.17.0.1:53530 - 3796 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013625s
	[INFO] 172.17.0.1:8443 - 53112 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022833s
	[INFO] 172.17.0.1:53530 - 40111 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008375s
	[INFO] 172.17.0.1:8443 - 48104 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031999s
	[INFO] 172.17.0.1:8443 - 16639 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012417s
	[INFO] 172.17.0.1:53530 - 53611 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012959s
	[INFO] 172.17.0.1:8443 - 62962 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012s
	[INFO] 172.17.0.1:53530 - 56279 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032624s
	[INFO] 172.17.0.1:8443 - 46956 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012s
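	The repeated NXDOMAIN answers above come from the pod resolver walking its resolv.conf search list (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) before the exact service name resolves with NOERROR. A small sketch of that behaviour follows, assuming it is run from a pod inside this cluster (the service name is taken from the log); a trailing dot marks the name as fully qualified and skips the search expansion.

	// lookup_sketch.go: compare a relative lookup (subject to resolv.conf
	// search expansion, hence the extra NXDOMAIN queries in the CoreDNS log)
	// with an absolute lookup that issues a single query.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		for _, name := range []string{
			"hello-world-app.default.svc.cluster.local",  // relative: search list applied first
			"hello-world-app.default.svc.cluster.local.", // absolute: single query, like the final NOERROR line
		} {
			addrs, err := net.LookupHost(name)
			fmt.Printf("%-60s -> %v %v\n", name, addrs, err)
		}
	}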
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-659000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-659000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eafc8e84d7336f18f4fb303d71d15fbd84fd16d5
	                    minikube.k8s.io/name=ingress-addon-legacy-659000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T09_54_30_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 16:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-659000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 16:55:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 16:55:36 +0000   Sat, 10 Jun 2023 16:54:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 16:55:36 +0000   Sat, 10 Jun 2023 16:54:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 16:55:36 +0000   Sat, 10 Jun 2023 16:54:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 16:55:36 +0000   Sat, 10 Jun 2023 16:54:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-659000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003892Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003892Ki
	  pods:               110
	System Info:
	  Machine ID:                 369bf3002cbb405597503edc6f5bf389
	  System UUID:                369bf3002cbb405597503edc6f5bf389
	  Boot ID:                    f4b0edee-e75c-439f-a5c6-9d6aeb480bd2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-pvd5j                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 coredns-66bff467f8-2x26t                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     79s
	  kube-system                 etcd-ingress-addon-legacy-659000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-h76br                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-ingress-addon-legacy-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 88s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s   kubelet     Node ingress-addon-legacy-659000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s   kubelet     Node ingress-addon-legacy-659000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s   kubelet     Node ingress-addon-legacy-659000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s   kubelet     Node ingress-addon-legacy-659000 status is now: NodeReady
	  Normal  Starting                 78s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jun10 16:53] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.622293] EINJ: EINJ table not found.
	[  +0.483458] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043657] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000836] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jun10 16:54] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.084413] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +3.469465] systemd-fstab-generator[805]: Ignoring "noauto" for root device
	[  +1.673808] systemd-fstab-generator[977]: Ignoring "noauto" for root device
	[  +0.172207] systemd-fstab-generator[1012]: Ignoring "noauto" for root device
	[  +0.067665] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +0.083636] systemd-fstab-generator[1036]: Ignoring "noauto" for root device
	[  +1.152075] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.128262] systemd-fstab-generator[1259]: Ignoring "noauto" for root device
	[  +4.465907] systemd-fstab-generator[1729]: Ignoring "noauto" for root device
	[  +7.735059] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.087557] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.145778] systemd-fstab-generator[2821]: Ignoring "noauto" for root device
	[ +15.980532] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.836773] kauditd_printk_skb: 7 callbacks suppressed
	[Jun10 16:55] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +29.876284] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [4e15caa6d608] <==
	* raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/06/10 16:54:24 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-06-10 16:54:24.510221 W | auth: simple token is not cryptographically signed
	2023-06-10 16:54:24.511710 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-06-10 16:54:24.515057 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-10 16:54:24.515119 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-10 16:54:24.515171 I | embed: listening for peers on 192.168.105.6:2380
	2023-06-10 16:54:24.515193 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-06-10 16:54:24.515354 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/06/10 16:54:24 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/06/10 16:54:24 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-06-10 16:54:24.906558 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-10 16:54:24.907103 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-10 16:54:24.907153 I | etcdserver/api: enabled capabilities for version 3.4
	2023-06-10 16:54:24.907181 I | etcdserver: published {Name:ingress-addon-legacy-659000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-06-10 16:54:24.907243 I | embed: ready to serve client requests
	2023-06-10 16:54:24.908670 I | embed: ready to serve client requests
	2023-06-10 16:54:24.909002 I | embed: serving client requests on 192.168.105.6:2379
	2023-06-10 16:54:24.913461 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  16:56:04 up 2 min,  0 users,  load average: 0.87, 0.34, 0.13
	Linux ingress-addon-legacy-659000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [641c636cfea9] <==
	* I0610 16:54:26.973991       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0610 16:54:27.003737       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0610 16:54:27.072762       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 16:54:27.072857       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0610 16:54:27.072863       1 cache.go:39] Caches are synced for autoregister controller
	I0610 16:54:27.072882       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 16:54:27.074023       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0610 16:54:27.971279       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0610 16:54:27.971337       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 16:54:27.983559       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0610 16:54:27.989351       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0610 16:54:27.989380       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0610 16:54:28.135368       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 16:54:28.145307       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0610 16:54:28.249547       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0610 16:54:28.249935       1 controller.go:609] quota admission added evaluator for: endpoints
	I0610 16:54:28.251728       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 16:54:29.268996       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0610 16:54:29.741193       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0610 16:54:29.950202       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0610 16:54:36.592488       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 16:54:45.119871       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0610 16:54:45.530603       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0610 16:54:55.759413       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0610 16:55:20.777834       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [2f7068c1943e] <==
	* W0610 16:54:45.549489       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-659000. Assuming now as a timestamp.
	I0610 16:54:45.549615       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0610 16:54:45.549727       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0610 16:54:45.550006       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-659000", UID:"028a6c09-10ba-41aa-b9bb-7bf16931dc12", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-659000 event: Registered Node ingress-addon-legacy-659000 in Controller
	E0610 16:54:45.563605       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"a8ce9c17-f2d4-4e4d-bf09-66c9e85a1f29", ResourceVersion:"214", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63822012869, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000ee7080), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000ee70a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000ee70c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001924480), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000ee70e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000ee7100), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000ee7140)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40013b3680), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40005d72c8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002080e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40007c7010)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40005d7378)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0610 16:54:45.574724       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 16:54:45.586268       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0610 16:54:45.617477       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"b8098da7-b925-4cf8-80f5-32ce26565292", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0610 16:54:45.625109       1 shared_informer.go:230] Caches are synced for expand 
	I0610 16:54:45.637012       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4b26b4da-9966-4e57-a660-27d665d8b407", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-8p9dc
	I0610 16:54:45.665283       1 shared_informer.go:230] Caches are synced for PV protection 
	I0610 16:54:45.668649       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 16:54:45.670869       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0610 16:54:45.674148       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 16:54:45.688523       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 16:54:45.688539       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0610 16:54:45.715915       1 shared_informer.go:230] Caches are synced for attach detach 
	I0610 16:54:55.752199       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c8049437-d0cd-40d9-ae15-d6d73b446c31", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0610 16:54:55.763907       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"2ee396c9-83fa-4afc-b823-12e82036a506", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-q4jcj
	I0610 16:54:55.776609       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"50ab2094-1b87-4754-a727-bc396849ff23", APIVersion:"batch/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-97hzl
	I0610 16:54:55.784775       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5e528e04-5759-477e-97f3-ac200b75f9c7", APIVersion:"batch/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-bk66j
	I0610 16:54:58.787148       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"50ab2094-1b87-4754-a727-bc396849ff23", APIVersion:"batch/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 16:54:59.860168       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5e528e04-5759-477e-97f3-ac200b75f9c7", APIVersion:"batch/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 16:55:31.055145       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"14cf6ef8-9753-4b28-8560-257c07d7d389", APIVersion:"apps/v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0610 16:55:31.058672       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"08226c71-9698-43cc-89ee-63db5e5b8ac6", APIVersion:"apps/v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-pvd5j
	
	* 
	* ==> kube-proxy [7eee54c94ef7] <==
	* W0610 16:54:46.025751       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0610 16:54:46.029784       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0610 16:54:46.029799       1 server_others.go:186] Using iptables Proxier.
	I0610 16:54:46.029927       1 server.go:583] Version: v1.18.20
	I0610 16:54:46.031150       1 config.go:315] Starting service config controller
	I0610 16:54:46.031406       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0610 16:54:46.031648       1 config.go:133] Starting endpoints config controller
	I0610 16:54:46.031669       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0610 16:54:46.131775       1 shared_informer.go:230] Caches are synced for service config 
	I0610 16:54:46.131778       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [f60c03150b91] <==
	* W0610 16:54:27.004561       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 16:54:27.028192       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0610 16:54:27.028204       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0610 16:54:27.029117       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0610 16:54:27.030053       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0610 16:54:27.030201       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 16:54:27.030208       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 16:54:27.030867       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 16:54:27.030911       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:54:27.030952       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 16:54:27.030991       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 16:54:27.031026       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:54:27.031093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 16:54:27.031142       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 16:54:27.031208       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 16:54:27.031257       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 16:54:27.031315       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 16:54:27.031430       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:54:27.031533       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 16:54:27.847944       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 16:54:27.921689       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 16:54:27.921741       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 16:54:28.002885       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 16:54:28.330384       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0610 16:54:45.173314       1 factory.go:503] pod: kube-system/coredns-66bff467f8-2x26t is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 16:53:57 UTC, ends at Sat 2023-06-10 16:56:04 UTC. --
	Jun 10 16:55:36 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:36.290467    2827 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30f5139860c21bcc75b79e7a84dfcc537df4b1ddec99bfb2d2de82c17dd22ec4
	Jun 10 16:55:36 ingress-addon-legacy-659000 kubelet[2827]: E0610 16:55:36.291391    2827 pod_workers.go:191] Error syncing pod a85695e0-b597-434c-a4f2-e60b73f343f9 ("hello-world-app-5f5d8b66bb-pvd5j_default(a85695e0-b597-434c-a4f2-e60b73f343f9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-pvd5j_default(a85695e0-b597-434c-a4f2-e60b73f343f9)"
	Jun 10 16:55:45 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:45.616127    2827 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f70b028137eaaa230257df29c1716ec451a55481dd1643db7aa95d0ea44f7c4b
	Jun 10 16:55:45 ingress-addon-legacy-659000 kubelet[2827]: E0610 16:55:45.617889    2827 pod_workers.go:191] Error syncing pod e4f937a1-217e-4b2e-8f35-690ef1ae3973 ("kube-ingress-dns-minikube_kube-system(e4f937a1-217e-4b2e-8f35-690ef1ae3973)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e4f937a1-217e-4b2e-8f35-690ef1ae3973)"
	Jun 10 16:55:46 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:46.473784    2827 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-klmm9" (UniqueName: "kubernetes.io/secret/e4f937a1-217e-4b2e-8f35-690ef1ae3973-minikube-ingress-dns-token-klmm9") pod "e4f937a1-217e-4b2e-8f35-690ef1ae3973" (UID: "e4f937a1-217e-4b2e-8f35-690ef1ae3973")
	Jun 10 16:55:46 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:46.478950    2827 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f937a1-217e-4b2e-8f35-690ef1ae3973-minikube-ingress-dns-token-klmm9" (OuterVolumeSpecName: "minikube-ingress-dns-token-klmm9") pod "e4f937a1-217e-4b2e-8f35-690ef1ae3973" (UID: "e4f937a1-217e-4b2e-8f35-690ef1ae3973"). InnerVolumeSpecName "minikube-ingress-dns-token-klmm9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 16:55:46 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:46.578187    2827 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-klmm9" (UniqueName: "kubernetes.io/secret/e4f937a1-217e-4b2e-8f35-690ef1ae3973-minikube-ingress-dns-token-klmm9") on node "ingress-addon-legacy-659000" DevicePath ""
	Jun 10 16:55:47 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:47.468706    2827 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f70b028137eaaa230257df29c1716ec451a55481dd1643db7aa95d0ea44f7c4b
	Jun 10 16:55:51 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:51.616669    2827 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30f5139860c21bcc75b79e7a84dfcc537df4b1ddec99bfb2d2de82c17dd22ec4
	Jun 10 16:55:51 ingress-addon-legacy-659000 kubelet[2827]: W0610 16:55:51.740954    2827 container.go:412] Failed to create summary reader for "/kubepods/besteffort/poda85695e0-b597-434c-a4f2-e60b73f343f9/5ec2423683c9836af3b58fbc3f51e37b3dc4b0c5b60d7871ba4dfb54e60edc4d": none of the resources are being tracked.
	Jun 10 16:55:52 ingress-addon-legacy-659000 kubelet[2827]: W0610 16:55:52.556109    2827 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-pvd5j through plugin: invalid network status for
	Jun 10 16:55:52 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:52.563857    2827 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30f5139860c21bcc75b79e7a84dfcc537df4b1ddec99bfb2d2de82c17dd22ec4
	Jun 10 16:55:52 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:55:52.565270    2827 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5ec2423683c9836af3b58fbc3f51e37b3dc4b0c5b60d7871ba4dfb54e60edc4d
	Jun 10 16:55:52 ingress-addon-legacy-659000 kubelet[2827]: E0610 16:55:52.565625    2827 pod_workers.go:191] Error syncing pod a85695e0-b597-434c-a4f2-e60b73f343f9 ("hello-world-app-5f5d8b66bb-pvd5j_default(a85695e0-b597-434c-a4f2-e60b73f343f9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-pvd5j_default(a85695e0-b597-434c-a4f2-e60b73f343f9)"
	Jun 10 16:55:53 ingress-addon-legacy-659000 kubelet[2827]: W0610 16:55:53.582478    2827 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-pvd5j through plugin: invalid network status for
	Jun 10 16:55:57 ingress-addon-legacy-659000 kubelet[2827]: E0610 16:55:57.082447    2827 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-q4jcj.17675a8f5251c3aa", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-q4jcj", UID:"e9261c55-9eec-4b7a-8d6f-028cf9433550", APIVersion:"v1", ResourceVersion:"442", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-659000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11948a744dae1aa, ext:86907189935, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11948a744dae1aa, ext:86907189935, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-q4jcj.17675a8f5251c3aa" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 16:55:57 ingress-addon-legacy-659000 kubelet[2827]: E0610 16:55:57.093609    2827 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-q4jcj.17675a8f5251c3aa", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-q4jcj", UID:"e9261c55-9eec-4b7a-8d6f-028cf9433550", APIVersion:"v1", ResourceVersion:"442", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-659000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11948a744dae1aa, ext:86907189935, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11948a745252bda, ext:86912058591, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-q4jcj.17675a8f5251c3aa" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 16:55:59 ingress-addon-legacy-659000 kubelet[2827]: W0610 16:55:59.722543    2827 pod_container_deletor.go:77] Container "46f2784546e8ed9d71e6f0ab99690547791bd547e216cca18382fa39c5cce69d" not found in pod's containers
	Jun 10 16:56:01 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:56:01.243521    2827 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wfztt" (UniqueName: "kubernetes.io/secret/e9261c55-9eec-4b7a-8d6f-028cf9433550-ingress-nginx-token-wfztt") pod "e9261c55-9eec-4b7a-8d6f-028cf9433550" (UID: "e9261c55-9eec-4b7a-8d6f-028cf9433550")
	Jun 10 16:56:01 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:56:01.243620    2827 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e9261c55-9eec-4b7a-8d6f-028cf9433550-webhook-cert") pod "e9261c55-9eec-4b7a-8d6f-028cf9433550" (UID: "e9261c55-9eec-4b7a-8d6f-028cf9433550")
	Jun 10 16:56:01 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:56:01.254722    2827 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9261c55-9eec-4b7a-8d6f-028cf9433550-ingress-nginx-token-wfztt" (OuterVolumeSpecName: "ingress-nginx-token-wfztt") pod "e9261c55-9eec-4b7a-8d6f-028cf9433550" (UID: "e9261c55-9eec-4b7a-8d6f-028cf9433550"). InnerVolumeSpecName "ingress-nginx-token-wfztt". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 16:56:01 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:56:01.255533    2827 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9261c55-9eec-4b7a-8d6f-028cf9433550-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e9261c55-9eec-4b7a-8d6f-028cf9433550" (UID: "e9261c55-9eec-4b7a-8d6f-028cf9433550"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 16:56:01 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:56:01.345109    2827 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wfztt" (UniqueName: "kubernetes.io/secret/e9261c55-9eec-4b7a-8d6f-028cf9433550-ingress-nginx-token-wfztt") on node "ingress-addon-legacy-659000" DevicePath ""
	Jun 10 16:56:01 ingress-addon-legacy-659000 kubelet[2827]: I0610 16:56:01.345198    2827 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e9261c55-9eec-4b7a-8d6f-028cf9433550-webhook-cert") on node "ingress-addon-legacy-659000" DevicePath ""
	Jun 10 16:56:02 ingress-addon-legacy-659000 kubelet[2827]: W0610 16:56:02.644652    2827 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/e9261c55-9eec-4b7a-8d6f-028cf9433550/volumes" does not exist
	
	* 
	* ==> storage-provisioner [272cb6f9abaa] <==
	* I0610 16:54:48.901888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 16:54:48.906357       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 16:54:48.906410       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 16:54:48.908950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 16:54:48.908970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85887b6d-b422-40d5-8bae-deaf4136be7f", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-659000_bc35947c-3b34-462d-b7aa-a7b5f63196b1 became leader
	I0610 16:54:48.909098       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-659000_bc35947c-3b34-462d-b7aa-a7b5f63196b1!
	I0610 16:54:49.009391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-659000_bc35947c-3b34-462d-b7aa-a7b5f63196b1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-659000 -n ingress-addon-legacy-659000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-659000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (54.96s)
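Diagnostic note (added for context; not part of the recorded test output): the kubelet log above shows hello-world-app and kube-ingress-dns-minikube stuck in CrashLoopBackOff while the ingress-nginx namespace is being torn down. A minimal, hypothetical sketch of kubectl commands one could run against this profile to inspect the crash-looping pods; the pod names are taken from the log and change between runs:

	kubectl --context ingress-addon-legacy-659000 get pods -A
	kubectl --context ingress-addon-legacy-659000 -n default describe pod hello-world-app-5f5d8b66bb-pvd5j
	kubectl --context ingress-addon-legacy-659000 -n kube-system logs kube-ingress-dns-minikube --previous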

                                                
                                    
TestMinikubeProfile (21.89s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-847000 --driver=qemu2 
E0610 09:57:05.010157    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:57:15.252498    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-847000 --driver=qemu2 : exit status 90 (21.440503667s)

                                                
                                                
-- stdout --
	* [first-847000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node first-847000 in cluster first-847000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-847000 --driver=qemu2 ": exit status 90
panic.go:522: *** TestMinikubeProfile FAILED at 2023-06-10 09:57:24.884181 -0700 PDT m=+2175.936228168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-848000 -n second-848000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-848000 -n second-848000: exit status 85 (42.419542ms)

                                                
                                                
-- stdout --
	* Profile "second-848000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-848000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-848000" host is not running, skipping log retrieval (state="* Profile \"second-848000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-848000\"")
helpers_test.go:175: Cleaning up "second-848000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-848000
panic.go:522: *** TestMinikubeProfile FAILED at 2023-06-10 09:57:25.155874 -0700 PDT m=+2176.207925001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-847000 -n first-847000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-847000 -n first-847000: exit status 6 (78.773042ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:57:25.230375    3445 status.go:415] kubeconfig endpoint: extract IP: "first-847000" does not appear in /Users/jenkins/minikube-integration/16578-1150/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "first-847000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "first-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-847000
--- FAIL: TestMinikubeProfile (21.89s)
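Diagnostic note (added for context; not part of the recorded test output): the start aborted with RUNTIME_ENABLE because `sudo systemctl restart cri-docker.socket` exited with status 1 inside the guest. A hedged sketch of follow-up commands, assuming the first-847000 VM were still present and reachable (the profile is deleted during cleanup above):

	out/minikube-darwin-arm64 -p first-847000 ssh -- sudo systemctl status cri-docker.socket cri-docker.service
	out/minikube-darwin-arm64 -p first-847000 ssh -- sudo journalctl -xeu cri-docker.service
	out/minikube-darwin-arm64 -p first-847000 logs --file=logs.txt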

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (101.04s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-422000 ssh -- ls /minikube-host
E0610 09:58:16.696456    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:58:39.663481    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:59:07.360932    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p mount-start-2-422000 ssh -- ls /minikube-host: exit status 1 (1m15.037349667s)

                                                
                                                
** stderr ** 
	ssh: dial tcp 192.168.105.10:22: connect: operation timed out

                                                
                                                
** /stderr **
mount_start_test.go:116: mount failed: "out/minikube-darwin-arm64 -p mount-start-2-422000 ssh -- ls /minikube-host" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-2-422000 -n mount-start-2-422000
E0610 09:59:38.617107    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-2-422000 -n mount-start-2-422000: exit status 3 (26.000861208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 09:59:43.527115    3503 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out
	E0610 09:59:43.527150    3503 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "mount-start-2-422000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (101.04s)
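Diagnostic note (added for context; not part of the recorded test output): the mount check failed because SSH to 192.168.105.10:22 timed out, and the follow-up status probe hit the same timeout. A hedged sketch of host-side checks to tell a dead guest from a broken vmnet network; the nc timeout value is arbitrary:

	out/minikube-darwin-arm64 -p mount-start-2-422000 ip
	nc -z -w 5 192.168.105.10 22; echo "ssh port check exit: $?"
	pgrep -fl socket_vmnet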

                                                
                                    
TestMultiNode/serial/StopNode (378.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-arm64 -p multinode-171000 node stop m03: (3.071572583s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status
E0610 10:01:54.748459    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 10:02:22.457269    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 10:02:53.348052    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:03:39.658908    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 status: exit status 7 (2m30.082481708s)

                                                
                                                
-- stdout --
	multinode-171000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-171000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-171000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 10:03:07.912281    3692 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:03:07.912362    3692 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:04:22.919595    3692 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0610 10:04:22.919672    3692 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out

                                                
                                                
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr
E0610 10:05:09.484281    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:05:37.188057    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr: exit status 7 (2m30.084193125s)

                                                
                                                
-- stdout --
	multinode-171000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-171000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-171000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:04:22.990562    3704 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:04:22.990764    3704 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:04:22.990768    3704 out.go:309] Setting ErrFile to fd 2...
	I0610 10:04:22.990772    3704 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:04:22.990861    3704 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:04:22.991016    3704 out.go:303] Setting JSON to false
	I0610 10:04:22.991030    3704 mustload.go:65] Loading cluster: multinode-171000
	I0610 10:04:22.991228    3704 notify.go:220] Checking for updates...
	I0610 10:04:22.991986    3704 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:04:22.992002    3704 status.go:255] checking status of multinode-171000 ...
	I0610 10:04:22.993101    3704 status.go:330] multinode-171000 host status = "Running" (err=<nil>)
	I0610 10:04:22.993114    3704 host.go:66] Checking if "multinode-171000" exists ...
	I0610 10:04:22.993261    3704 host.go:66] Checking if "multinode-171000" exists ...
	I0610 10:04:22.993412    3704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:04:22.993428    3704 sshutil.go:53] new ssh client: &{IP:192.168.105.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/id_rsa Username:docker}
	W0610 10:05:37.994415    3704 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.11:22: connect: operation timed out
	W0610 10:05:37.996659    3704 start.go:275] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:05:37.996710    3704 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	I0610 10:05:37.996739    3704 status.go:257] multinode-171000 status: &{Name:multinode-171000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:05:37.996787    3704 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	I0610 10:05:37.996809    3704 status.go:255] checking status of multinode-171000-m02 ...
	I0610 10:05:37.999786    3704 status.go:330] multinode-171000-m02 host status = "Running" (err=<nil>)
	I0610 10:05:37.999810    3704 host.go:66] Checking if "multinode-171000-m02" exists ...
	I0610 10:05:38.000258    3704 host.go:66] Checking if "multinode-171000-m02" exists ...
	I0610 10:05:38.000758    3704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:05:38.000786    3704 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m02/id_rsa Username:docker}
	W0610 10:06:53.002432    3704 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.12:22: connect: operation timed out
	W0610 10:06:53.002601    3704 start.go:275] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0610 10:06:53.002639    3704 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	I0610 10:06:53.002661    3704 status.go:257] multinode-171000-m02 status: &{Name:multinode-171000-m02 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:06:53.002704    3704 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	I0610 10:06:53.002724    3704 status.go:255] checking status of multinode-171000-m03 ...
	I0610 10:06:53.003591    3704 status.go:330] multinode-171000-m03 host status = "Stopped" (err=<nil>)
	I0610 10:06:53.003612    3704 status.go:343] host is not running, skipping remaining checks
	I0610 10:06:53.003623    3704 status.go:257] multinode-171000-m03 status: &{Name:multinode-171000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr": multinode-171000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
multinode-171000-m02
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
multinode-171000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
E0610 10:06:54.744236    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 3 (1m15.076601625s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 10:08:08.080111    3729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:08:08.080140    3729 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopNode (378.32s)
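Diagnostic note (added for context; not part of the recorded test output): status reported host: Error for multinode-171000 and -m02 only because SSH to 192.168.105.11 and 192.168.105.12 timed out; the intentionally stopped m03 is the only node with a clean state. A hedged sketch of per-node checks, assuming the profile still exists (--node selects the target node for minikube ssh):

	out/minikube-darwin-arm64 -p multinode-171000 node list
	out/minikube-darwin-arm64 -p multinode-171000 ssh --node multinode-171000-m02 -- true
	nc -z -w 5 192.168.105.11 22; echo "ssh port check exit: $?"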

                                                
                                    
TestMultiNode/serial/StartAfterStop (230.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 node start m03 --alsologtostderr: exit status 80 (5.1238605s)

                                                
                                                
-- stdout --
	* Starting worker node multinode-171000-m03 in cluster multinode-171000
	* Restarting existing qemu2 VM for "multinode-171000-m03" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-171000-m03" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:08:08.145572    3739 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:08:08.145842    3739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:08:08.145846    3739 out.go:309] Setting ErrFile to fd 2...
	I0610 10:08:08.145850    3739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:08:08.145945    3739 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:08:08.146232    3739 mustload.go:65] Loading cluster: multinode-171000
	I0610 10:08:08.146477    3739 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0610 10:08:08.146726    3739 host.go:58] "multinode-171000-m03" host status: Stopped
	I0610 10:08:08.150909    3739 out.go:177] * Starting worker node multinode-171000-m03 in cluster multinode-171000
	I0610 10:08:08.154891    3739 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:08:08.154917    3739 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:08:08.154934    3739 cache.go:57] Caching tarball of preloaded images
	I0610 10:08:08.155038    3739 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:08:08.155044    3739 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:08:08.155127    3739 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/multinode-171000/config.json ...
	I0610 10:08:08.155466    3739 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:08:08.155476    3739 start.go:364] acquiring machines lock for multinode-171000-m03: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:08:08.155524    3739 start.go:368] acquired machines lock for "multinode-171000-m03" in 33.5µs
	I0610 10:08:08.155535    3739 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:08:08.155539    3739 fix.go:55] fixHost starting: m03
	I0610 10:08:08.155660    3739 fix.go:103] recreateIfNeeded on multinode-171000-m03: state=Stopped err=<nil>
	W0610 10:08:08.155667    3739 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:08:08.159786    3739 out.go:177] * Restarting existing qemu2 VM for "multinode-171000-m03" ...
	I0610 10:08:08.163855    3739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:89:2f:78:2f:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/disk.qcow2
	I0610 10:08:08.166707    3739 main.go:141] libmachine: STDOUT: 
	I0610 10:08:08.166728    3739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:08:08.166771    3739 fix.go:57] fixHost completed within 11.230166ms
	I0610 10:08:08.166777    3739 start.go:83] releasing machines lock for "multinode-171000-m03", held for 11.248041ms
	W0610 10:08:08.166786    3739 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:08:08.166825    3739 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:08:08.166830    3739 start.go:702] Will try again in 5 seconds ...
	I0610 10:08:13.168860    3739 start.go:364] acquiring machines lock for multinode-171000-m03: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:08:13.169100    3739 start.go:368] acquired machines lock for "multinode-171000-m03" in 192.583µs
	I0610 10:08:13.169201    3739 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:08:13.169213    3739 fix.go:55] fixHost starting: m03
	I0610 10:08:13.169799    3739 fix.go:103] recreateIfNeeded on multinode-171000-m03: state=Stopped err=<nil>
	W0610 10:08:13.169814    3739 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:08:13.174076    3739 out.go:177] * Restarting existing qemu2 VM for "multinode-171000-m03" ...
	I0610 10:08:13.178093    3739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:89:2f:78:2f:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/disk.qcow2
	I0610 10:08:13.182739    3739 main.go:141] libmachine: STDOUT: 
	I0610 10:08:13.182779    3739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:08:13.182828    3739 fix.go:57] fixHost completed within 13.6155ms
	I0610 10:08:13.182842    3739 start.go:83] releasing machines lock for "multinode-171000-m03", held for 13.726333ms
	W0610 10:08:13.182942    3739 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:08:13.188011    3739 out.go:177] 
	W0610 10:08:13.191985    3739 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:08:13.191999    3739 out.go:239] * 
	* 
	W0610 10:08:13.199258    3739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:08:13.202916    3739 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0610 10:08:08.145572    3739 out.go:296] Setting OutFile to fd 1 ...
I0610 10:08:08.145842    3739 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 10:08:08.145846    3739 out.go:309] Setting ErrFile to fd 2...
I0610 10:08:08.145850    3739 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 10:08:08.145945    3739 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
I0610 10:08:08.146232    3739 mustload.go:65] Loading cluster: multinode-171000
I0610 10:08:08.146477    3739 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
W0610 10:08:08.146726    3739 host.go:58] "multinode-171000-m03" host status: Stopped
I0610 10:08:08.150909    3739 out.go:177] * Starting worker node multinode-171000-m03 in cluster multinode-171000
I0610 10:08:08.154891    3739 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0610 10:08:08.154917    3739 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
I0610 10:08:08.154934    3739 cache.go:57] Caching tarball of preloaded images
I0610 10:08:08.155038    3739 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0610 10:08:08.155044    3739 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
I0610 10:08:08.155127    3739 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/multinode-171000/config.json ...
I0610 10:08:08.155466    3739 cache.go:195] Successfully downloaded all kic artifacts
I0610 10:08:08.155476    3739 start.go:364] acquiring machines lock for multinode-171000-m03: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 10:08:08.155524    3739 start.go:368] acquired machines lock for "multinode-171000-m03" in 33.5µs
I0610 10:08:08.155535    3739 start.go:96] Skipping create...Using existing machine configuration
I0610 10:08:08.155539    3739 fix.go:55] fixHost starting: m03
I0610 10:08:08.155660    3739 fix.go:103] recreateIfNeeded on multinode-171000-m03: state=Stopped err=<nil>
W0610 10:08:08.155667    3739 fix.go:129] unexpected machine state, will restart: <nil>
I0610 10:08:08.159786    3739 out.go:177] * Restarting existing qemu2 VM for "multinode-171000-m03" ...
I0610 10:08:08.163855    3739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:89:2f:78:2f:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/disk.qcow2
I0610 10:08:08.166707    3739 main.go:141] libmachine: STDOUT: 
I0610 10:08:08.166728    3739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0610 10:08:08.166771    3739 fix.go:57] fixHost completed within 11.230166ms
I0610 10:08:08.166777    3739 start.go:83] releasing machines lock for "multinode-171000-m03", held for 11.248041ms
W0610 10:08:08.166786    3739 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 10:08:08.166825    3739 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 10:08:08.166830    3739 start.go:702] Will try again in 5 seconds ...
I0610 10:08:13.168860    3739 start.go:364] acquiring machines lock for multinode-171000-m03: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 10:08:13.169100    3739 start.go:368] acquired machines lock for "multinode-171000-m03" in 192.583µs
I0610 10:08:13.169201    3739 start.go:96] Skipping create...Using existing machine configuration
I0610 10:08:13.169213    3739 fix.go:55] fixHost starting: m03
I0610 10:08:13.169799    3739 fix.go:103] recreateIfNeeded on multinode-171000-m03: state=Stopped err=<nil>
W0610 10:08:13.169814    3739 fix.go:129] unexpected machine state, will restart: <nil>
I0610 10:08:13.174076    3739 out.go:177] * Restarting existing qemu2 VM for "multinode-171000-m03" ...
I0610 10:08:13.178093    3739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:89:2f:78:2f:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000-m03/disk.qcow2
I0610 10:08:13.182739    3739 main.go:141] libmachine: STDOUT: 
I0610 10:08:13.182779    3739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0610 10:08:13.182828    3739 fix.go:57] fixHost completed within 13.6155ms
I0610 10:08:13.182842    3739 start.go:83] releasing machines lock for "multinode-171000-m03", held for 13.726333ms
W0610 10:08:13.182942    3739 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 10:08:13.188011    3739 out.go:177] 
W0610 10:08:13.191985    3739 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 10:08:13.191999    3739 out.go:239] * 
* 
W0610 10:08:13.199258    3739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 10:08:13.202916    3739 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-171000 node start m03 --alsologtostderr": exit status 80
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status
E0610 10:08:39.654654    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 10:10:02.713636    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 10:10:09.478437    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 status: exit status 7 (2m30.064063625s)

                                                
                                                
-- stdout --
	multinode-171000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-171000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-171000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 10:09:28.258459    3743 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:09:28.258542    3743 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:10:43.264815    3743 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0610 10:10:43.264888    3743 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out

                                                
                                                
** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-171000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
E0610 10:11:54.739613    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 3 (1m15.076438791s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 10:11:58.341708    3761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0610 10:11:58.341772    3761 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StartAfterStop (230.27s)
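Note on the StartAfterStop failure above: both `minikube status` invocations time out on SSH dials to the node addresses (dial tcp 192.168.105.11:22 / 192.168.105.12:22: operation timed out), so the guests never became reachable after the restart. The following is not part of the test suite; it is a minimal Go sketch for locally checking whether a guest's SSH port is reachable at all before re-running the test. The address and timeout are placeholders, not values taken from the report.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder guest address; substitute the node IP reported by `minikube status`.
	addr := "192.168.105.11:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// An "operation timed out" here matches the status.go errors captured above.
		fmt.Printf("ssh port unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}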

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (41.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-171000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-171000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-171000: (36.158794459s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-171000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-171000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.210240875s)

                                                
                                                
-- stdout --
	* [multinode-171000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-171000 in cluster multinode-171000
	* Restarting existing qemu2 VM for "multinode-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:12:34.639679    3791 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:12:34.639862    3791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:34.639866    3791 out.go:309] Setting ErrFile to fd 2...
	I0610 10:12:34.639870    3791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:34.639980    3791 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:12:34.641303    3791 out.go:303] Setting JSON to false
	I0610 10:12:34.660932    3791 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4325,"bootTime":1686412829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:12:34.660993    3791 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:12:34.665866    3791 out.go:177] * [multinode-171000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:12:34.670842    3791 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:12:34.670862    3791 notify.go:220] Checking for updates...
	I0610 10:12:34.674921    3791 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:12:34.677826    3791 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:12:34.680900    3791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:12:34.683883    3791 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:12:34.686928    3791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:12:34.690195    3791 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:12:34.690253    3791 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:12:34.694894    3791 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:12:34.701793    3791 start.go:297] selected driver: qemu2
	I0610 10:12:34.701797    3791 start.go:875] validating driver "qemu2" against &{Name:multinode-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:multinode-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false ina
ccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:12:34.701860    3791 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:12:34.704213    3791 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:12:34.704238    3791 cni.go:84] Creating CNI manager for ""
	I0610 10:12:34.704242    3791 cni.go:136] 3 nodes found, recommending kindnet
	I0610 10:12:34.704251    3791 start_flags.go:319] config:
	{Name:multinode-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-171000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false i
stio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Sock
etVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:12:34.704434    3791 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:34.711864    3791 out.go:177] * Starting control plane node multinode-171000 in cluster multinode-171000
	I0610 10:12:34.715831    3791 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:12:34.715861    3791 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:12:34.715881    3791 cache.go:57] Caching tarball of preloaded images
	I0610 10:12:34.715955    3791 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:12:34.715961    3791 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:12:34.716064    3791 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/multinode-171000/config.json ...
	I0610 10:12:34.716434    3791 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:12:34.716448    3791 start.go:364] acquiring machines lock for multinode-171000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:12:34.716481    3791 start.go:368] acquired machines lock for "multinode-171000" in 26.917µs
	I0610 10:12:34.716492    3791 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:12:34.716497    3791 fix.go:55] fixHost starting: 
	I0610 10:12:34.716624    3791 fix.go:103] recreateIfNeeded on multinode-171000: state=Stopped err=<nil>
	W0610 10:12:34.716632    3791 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:12:34.720751    3791 out.go:177] * Restarting existing qemu2 VM for "multinode-171000" ...
	I0610 10:12:34.727802    3791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d7:dc:26:7c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/disk.qcow2
	I0610 10:12:34.729581    3791 main.go:141] libmachine: STDOUT: 
	I0610 10:12:34.729596    3791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:12:34.729625    3791 fix.go:57] fixHost completed within 13.128375ms
	I0610 10:12:34.729630    3791 start.go:83] releasing machines lock for "multinode-171000", held for 13.145667ms
	W0610 10:12:34.729637    3791 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:12:34.729674    3791 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:12:34.729683    3791 start.go:702] Will try again in 5 seconds ...
	I0610 10:12:39.731729    3791 start.go:364] acquiring machines lock for multinode-171000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:12:39.732229    3791 start.go:368] acquired machines lock for "multinode-171000" in 404.292µs
	I0610 10:12:39.732392    3791 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:12:39.732413    3791 fix.go:55] fixHost starting: 
	I0610 10:12:39.733141    3791 fix.go:103] recreateIfNeeded on multinode-171000: state=Stopped err=<nil>
	W0610 10:12:39.733168    3791 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:12:39.736534    3791 out.go:177] * Restarting existing qemu2 VM for "multinode-171000" ...
	I0610 10:12:39.740767    3791 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d7:dc:26:7c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/disk.qcow2
	I0610 10:12:39.749950    3791 main.go:141] libmachine: STDOUT: 
	I0610 10:12:39.749996    3791 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:12:39.750084    3791 fix.go:57] fixHost completed within 17.673042ms
	I0610 10:12:39.750101    3791 start.go:83] releasing machines lock for "multinode-171000", held for 17.843083ms
	W0610 10:12:39.750307    3791 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:12:39.757450    3791 out.go:177] 
	W0610 10:12:39.761663    3791 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:12:39.761687    3791 out.go:239] * 
	* 
	W0610 10:12:39.764298    3791 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:12:39.776477    3791 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-171000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-171000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 7 (31.921625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (41.54s)
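Note on the RestartKeepsNodes failure above: both restart attempts abort before the VM boots because /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), i.e. the socket_vmnet daemon on the agent is not accepting connections. The sketch below is not taken from the suite; it is a minimal, assumption-laden probe of that socket from Go. The path is the one shown in the libmachine command line above; treating it as a stream-listener unix socket is an assumption of this sketch.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the libmachine command line in the log above.
	const sock = "/var/run/socket_vmnet"

	fi, err := os.Stat(sock)
	if err != nil {
		fmt.Printf("socket path missing: %v\n", err)
		return
	}
	if fi.Mode()&os.ModeSocket == 0 {
		fmt.Println("path exists but is not a unix socket")
		return
	}

	// Assumes the daemon accepts stream connections; a "connection refused"
	// here matches the driver failure captured above.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}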

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 node delete m03: exit status 89 (39.718541ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-171000"

                                                
                                                
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-171000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr: exit status 7 (28.85525ms)

                                                
                                                
-- stdout --
	multinode-171000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-171000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-171000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:12:39.955288    3804 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:12:39.955407    3804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:39.955410    3804 out.go:309] Setting ErrFile to fd 2...
	I0610 10:12:39.955413    3804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:39.955480    3804 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:12:39.955592    3804 out.go:303] Setting JSON to false
	I0610 10:12:39.955606    3804 mustload.go:65] Loading cluster: multinode-171000
	I0610 10:12:39.955653    3804 notify.go:220] Checking for updates...
	I0610 10:12:39.955782    3804 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:12:39.955787    3804 status.go:255] checking status of multinode-171000 ...
	I0610 10:12:39.955973    3804 status.go:330] multinode-171000 host status = "Stopped" (err=<nil>)
	I0610 10:12:39.955977    3804 status.go:343] host is not running, skipping remaining checks
	I0610 10:12:39.955979    3804 status.go:257] multinode-171000 status: &{Name:multinode-171000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:12:39.955989    3804 status.go:255] checking status of multinode-171000-m02 ...
	I0610 10:12:39.956082    3804 status.go:330] multinode-171000-m02 host status = "Stopped" (err=<nil>)
	I0610 10:12:39.956084    3804 status.go:343] host is not running, skipping remaining checks
	I0610 10:12:39.956086    3804 status.go:257] multinode-171000-m02 status: &{Name:multinode-171000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:12:39.956089    3804 status.go:255] checking status of multinode-171000-m03 ...
	I0610 10:12:39.956177    3804 status.go:330] multinode-171000-m03 host status = "Stopped" (err=<nil>)
	I0610 10:12:39.956179    3804 status.go:343] host is not running, skipping remaining checks
	I0610 10:12:39.956181    3804 status.go:257] multinode-171000-m03 status: &{Name:multinode-171000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 7 (28.512541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 status: exit status 7 (30.527208ms)

                                                
                                                
-- stdout --
	multinode-171000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-171000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-171000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr: exit status 7 (28.721666ms)

                                                
                                                
-- stdout --
	multinode-171000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-171000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-171000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:12:40.126671    3812 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:12:40.126821    3812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:40.126826    3812 out.go:309] Setting ErrFile to fd 2...
	I0610 10:12:40.126828    3812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:40.126906    3812 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:12:40.127008    3812 out.go:303] Setting JSON to false
	I0610 10:12:40.127021    3812 mustload.go:65] Loading cluster: multinode-171000
	I0610 10:12:40.127054    3812 notify.go:220] Checking for updates...
	I0610 10:12:40.127210    3812 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:12:40.127217    3812 status.go:255] checking status of multinode-171000 ...
	I0610 10:12:40.127404    3812 status.go:330] multinode-171000 host status = "Stopped" (err=<nil>)
	I0610 10:12:40.127408    3812 status.go:343] host is not running, skipping remaining checks
	I0610 10:12:40.127410    3812 status.go:257] multinode-171000 status: &{Name:multinode-171000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:12:40.127420    3812 status.go:255] checking status of multinode-171000-m02 ...
	I0610 10:12:40.127518    3812 status.go:330] multinode-171000-m02 host status = "Stopped" (err=<nil>)
	I0610 10:12:40.127520    3812 status.go:343] host is not running, skipping remaining checks
	I0610 10:12:40.127522    3812 status.go:257] multinode-171000-m02 status: &{Name:multinode-171000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:12:40.127526    3812 status.go:255] checking status of multinode-171000-m03 ...
	I0610 10:12:40.127613    3812 status.go:330] multinode-171000-m03 host status = "Stopped" (err=<nil>)
	I0610 10:12:40.127615    3812 status.go:343] host is not running, skipping remaining checks
	I0610 10:12:40.127617    3812 status.go:257] multinode-171000-m03 status: &{Name:multinode-171000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr": multinode-171000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode-171000-m02
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode-171000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr": multinode-171000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode-171000-m02
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode-171000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 7 (28.647584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.17s)
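Note on the StopMultiNode failure above: every node reports "host: Stopped" and "kubelet: Stopped", yet the test logs "incorrect number of stopped hosts/kubelets" at multinode_test.go:333 and :337. The messages suggest the assertion counts those literal lines in the status output against an expected node count, and the count no longer matches because the earlier DeleteNode step failed and m03 is still listed. The snippet below is only an illustration of that kind of check; the expected value of 2 is hypothetical and is not taken from multinode_test.go.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abridged status text as captured above; in the real test this comes
	// from the stdout of `minikube status`.
	status := "multinode-171000\nhost: Stopped\nkubelet: Stopped\n" +
		"multinode-171000-m02\nhost: Stopped\nkubelet: Stopped\n" +
		"multinode-171000-m03\nhost: Stopped\nkubelet: Stopped\n"

	// Hypothetical expectation: two nodes, because m03 was supposed to have
	// been deleted by the previous step. Not a value taken from the suite.
	const want = 2
	got := strings.Count(status, "host: Stopped")
	fmt.Printf("stopped hosts: got %d, want %d, match=%v\n", got, want, got == want)
}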

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-171000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-171000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176402875s)

                                                
                                                
-- stdout --
	* [multinode-171000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-171000 in cluster multinode-171000
	* Restarting existing qemu2 VM for "multinode-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:12:40.183581    3816 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:12:40.183701    3816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:40.183703    3816 out.go:309] Setting ErrFile to fd 2...
	I0610 10:12:40.183706    3816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:40.183780    3816 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:12:40.184751    3816 out.go:303] Setting JSON to false
	I0610 10:12:40.199784    3816 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4331,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:12:40.199852    3816 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:12:40.203365    3816 out.go:177] * [multinode-171000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:12:40.210223    3816 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:12:40.210289    3816 notify.go:220] Checking for updates...
	I0610 10:12:40.216292    3816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:12:40.219200    3816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:12:40.222293    3816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:12:40.225293    3816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:12:40.228229    3816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:12:40.231557    3816 config.go:182] Loaded profile config "multinode-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:12:40.231822    3816 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:12:40.236286    3816 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:12:40.243215    3816 start.go:297] selected driver: qemu2
	I0610 10:12:40.243220    3816 start.go:875] validating driver "qemu2" against &{Name:multinode-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:multinode-171000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:fal
se inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:12:40.243302    3816 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:12:40.245133    3816 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:12:40.245156    3816 cni.go:84] Creating CNI manager for ""
	I0610 10:12:40.245160    3816 cni.go:136] 3 nodes found, recommending kindnet
	I0610 10:12:40.245165    3816 start_flags.go:319] config:
	{Name:multinode-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-171000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:f
alse istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clien
t SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:12:40.245292    3816 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:40.254202    3816 out.go:177] * Starting control plane node multinode-171000 in cluster multinode-171000
	I0610 10:12:40.258301    3816 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:12:40.258319    3816 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:12:40.258334    3816 cache.go:57] Caching tarball of preloaded images
	I0610 10:12:40.258391    3816 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:12:40.258396    3816 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:12:40.258494    3816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/multinode-171000/config.json ...
	I0610 10:12:40.258865    3816 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:12:40.258874    3816 start.go:364] acquiring machines lock for multinode-171000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:12:40.258901    3816 start.go:368] acquired machines lock for "multinode-171000" in 21.708µs
	I0610 10:12:40.258912    3816 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:12:40.258917    3816 fix.go:55] fixHost starting: 
	I0610 10:12:40.259039    3816 fix.go:103] recreateIfNeeded on multinode-171000: state=Stopped err=<nil>
	W0610 10:12:40.259047    3816 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:12:40.262265    3816 out.go:177] * Restarting existing qemu2 VM for "multinode-171000" ...
	I0610 10:12:40.270323    3816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d7:dc:26:7c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/disk.qcow2
	I0610 10:12:40.272105    3816 main.go:141] libmachine: STDOUT: 
	I0610 10:12:40.272123    3816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:12:40.272154    3816 fix.go:57] fixHost completed within 13.2365ms
	I0610 10:12:40.272160    3816 start.go:83] releasing machines lock for "multinode-171000", held for 13.2545ms
	W0610 10:12:40.272167    3816 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:12:40.272203    3816 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:12:40.272208    3816 start.go:702] Will try again in 5 seconds ...
	I0610 10:12:45.274282    3816 start.go:364] acquiring machines lock for multinode-171000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:12:45.274640    3816 start.go:368] acquired machines lock for "multinode-171000" in 275.5µs
	I0610 10:12:45.274785    3816 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:12:45.274806    3816 fix.go:55] fixHost starting: 
	I0610 10:12:45.275544    3816 fix.go:103] recreateIfNeeded on multinode-171000: state=Stopped err=<nil>
	W0610 10:12:45.275576    3816 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:12:45.279934    3816 out.go:177] * Restarting existing qemu2 VM for "multinode-171000" ...
	I0610 10:12:45.293033    3816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d7:dc:26:7c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/multinode-171000/disk.qcow2
	I0610 10:12:45.301963    3816 main.go:141] libmachine: STDOUT: 
	I0610 10:12:45.302013    3816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:12:45.302089    3816 fix.go:57] fixHost completed within 27.288166ms
	I0610 10:12:45.302104    3816 start.go:83] releasing machines lock for "multinode-171000", held for 27.442417ms
	W0610 10:12:45.302279    3816 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:12:45.309718    3816 out.go:177] 
	W0610 10:12:45.313958    3816 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:12:45.313981    3816 out.go:239] * 
	* 
	W0610 10:12:45.316479    3816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:12:45.326890    3816 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-171000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 7 (68.51075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (10.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-171000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-171000-m03 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-171000-m03 --driver=qemu2 : exit status 14 (98.647041ms)

                                                
                                                
-- stdout --
	* [multinode-171000-m03] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-171000-m03' is duplicated with machine name 'multinode-171000-m03' in profile 'multinode-171000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-171000-m04 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-171000-m04 --driver=qemu2 : exit status 80 (9.860213291s)

                                                
                                                
-- stdout --
	* [multinode-171000-m04] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-171000-m04 in cluster multinode-171000-m04
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-171000-m04" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-171000-m04" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-171000-m04 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-171000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-171000: exit status 89 (81.271334ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-171000"

                                                
                                                
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-171000-m04
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-171000 -n multinode-171000: exit status 7 (29.457792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-171000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (10.20s)

TestPreload (10.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.054744s)

                                                
                                                
-- stdout --
	* [test-preload-558000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-558000 in cluster test-preload-558000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:12:55.854177    3858 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:12:55.854328    3858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:55.854330    3858 out.go:309] Setting ErrFile to fd 2...
	I0610 10:12:55.854333    3858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:12:55.854408    3858 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:12:55.855454    3858 out.go:303] Setting JSON to false
	I0610 10:12:55.871249    3858 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4346,"bootTime":1686412829,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:12:55.871318    3858 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:12:55.876437    3858 out.go:177] * [test-preload-558000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:12:55.883465    3858 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:12:55.883538    3858 notify.go:220] Checking for updates...
	I0610 10:12:55.890407    3858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:12:55.893456    3858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:12:55.896446    3858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:12:55.899449    3858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:12:55.902441    3858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:12:55.905579    3858 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:12:55.909399    3858 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:12:55.915382    3858 start.go:297] selected driver: qemu2
	I0610 10:12:55.915386    3858 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:12:55.915393    3858 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:12:55.917255    3858 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:12:55.920389    3858 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:12:55.923491    3858 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:12:55.923506    3858 cni.go:84] Creating CNI manager for ""
	I0610 10:12:55.923512    3858 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:12:55.923516    3858 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:12:55.923522    3858 start_flags.go:319] config:
	{Name:test-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-558000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:12:55.923604    3858 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.931402    3858 out.go:177] * Starting control plane node test-preload-558000 in cluster test-preload-558000
	I0610 10:12:55.935407    3858 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0610 10:12:55.935510    3858 cache.go:107] acquiring lock: {Name:mk5e9db964749ce1875223013d924a379c2d67b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935517    3858 cache.go:107] acquiring lock: {Name:mkcbc06311b9288e1d0ed9600e8bbbbaec0c2ea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935525    3858 cache.go:107] acquiring lock: {Name:mk8a7818e8fa8f5ccd4258084f2cfd4698ddc8ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935651    3858 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/test-preload-558000/config.json ...
	I0610 10:12:55.935650    3858 cache.go:107] acquiring lock: {Name:mkcedfd1c2e0ec16358b1e0274fdd01916171f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935669    3858 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/test-preload-558000/config.json: {Name:mk686ff599fe585e663ce693174138f6ffac8c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:12:55.935689    3858 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:12:55.935692    3858 cache.go:107] acquiring lock: {Name:mk0259462281ecc3c63e024af548de9ed224f3a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935703    3858 cache.go:107] acquiring lock: {Name:mk02666fd7673e255d8f1c960bbaba1515d27c34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935731    3858 cache.go:107] acquiring lock: {Name:mke1149b25d4c9e6cc72741d1ef766605df08da8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935753    3858 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 10:12:55.935691    3858 cache.go:107] acquiring lock: {Name:mk351be89a9e1f3d14b60ab9544bead791326e8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:12:55.935809    3858 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 10:12:55.935815    3858 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 10:12:55.935918    3858 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 10:12:55.935966    3858 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 10:12:55.936006    3858 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:12:55.936022    3858 start.go:364] acquiring machines lock for test-preload-558000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:12:55.936028    3858 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 10:12:55.936044    3858 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 10:12:55.936062    3858 start.go:368] acquired machines lock for "test-preload-558000" in 31.958µs
	I0610 10:12:55.936076    3858 start.go:93] Provisioning new machine with config: &{Name:test-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-558000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:12:55.936123    3858 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:12:55.944333    3858 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:12:55.948149    3858 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 10:12:55.948775    3858 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 10:12:55.949104    3858 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 10:12:55.949151    3858 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 10:12:55.949167    3858 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 10:12:55.952502    3858 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:12:55.952508    3858 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 10:12:55.952549    3858 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 10:12:55.963279    3858 start.go:159] libmachine.API.Create for "test-preload-558000" (driver="qemu2")
	I0610 10:12:55.963295    3858 client.go:168] LocalClient.Create starting
	I0610 10:12:55.963361    3858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:12:55.963383    3858 main.go:141] libmachine: Decoding PEM data...
	I0610 10:12:55.963398    3858 main.go:141] libmachine: Parsing certificate...
	I0610 10:12:55.963446    3858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:12:55.963462    3858 main.go:141] libmachine: Decoding PEM data...
	I0610 10:12:55.963468    3858 main.go:141] libmachine: Parsing certificate...
	I0610 10:12:55.963767    3858 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:12:56.088538    3858 main.go:141] libmachine: Creating SSH key...
	I0610 10:12:56.193847    3858 main.go:141] libmachine: Creating Disk image...
	I0610 10:12:56.193879    3858 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:12:56.194092    3858 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 10:12:56.202856    3858 main.go:141] libmachine: STDOUT: 
	I0610 10:12:56.202877    3858 main.go:141] libmachine: STDERR: 
	I0610 10:12:56.202939    3858 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2 +20000M
	I0610 10:12:56.210882    3858 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:12:56.210901    3858 main.go:141] libmachine: STDERR: 
	I0610 10:12:56.210917    3858 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 10:12:56.210924    3858 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:12:56.210961    3858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:31:54:29:f4:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 10:12:56.212760    3858 main.go:141] libmachine: STDOUT: 
	I0610 10:12:56.212774    3858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:12:56.212791    3858 client.go:171] LocalClient.Create took 249.494667ms
	I0610 10:12:57.221264    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0610 10:12:57.240888    3858 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 10:12:57.240922    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 10:12:57.277291    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 10:12:57.417567    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0610 10:12:57.417595    3858 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.481941917s
	I0610 10:12:57.417605    3858 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0610 10:12:57.531185    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0610 10:12:57.555422    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0610 10:12:57.897702    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0610 10:12:58.093661    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 10:12:58.212977    3858 start.go:128] duration metric: createHost completed in 2.276860417s
	I0610 10:12:58.213035    3858 start.go:83] releasing machines lock for "test-preload-558000", held for 2.276986209s
	W0610 10:12:58.213098    3858 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:12:58.225231    3858 out.go:177] * Deleting "test-preload-558000" in qemu2 ...
	W0610 10:12:58.244484    3858 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:12:58.244512    3858 start.go:702] Will try again in 5 seconds ...
	W0610 10:12:58.282863    3858 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 10:12:58.282962    3858 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 10:12:59.087324    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 10:12:59.087365    3858 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.151899209s
	I0610 10:12:59.087391    3858 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 10:12:59.257648    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0610 10:12:59.257695    3858 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.322223458s
	I0610 10:12:59.257719    3858 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0610 10:12:59.865669    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0610 10:12:59.865716    3858 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.930066084s
	I0610 10:12:59.865749    3858 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0610 10:13:00.677602    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0610 10:13:00.677667    3858 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.742147625s
	I0610 10:13:00.677705    3858 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0610 10:13:02.389028    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0610 10:13:02.389082    3858 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.45367125s
	I0610 10:13:02.389114    3858 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0610 10:13:02.519341    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0610 10:13:02.519382    3858 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.583785625s
	I0610 10:13:02.519407    3858 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0610 10:13:03.244940    3858 start.go:364] acquiring machines lock for test-preload-558000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:13:03.245371    3858 start.go:368] acquired machines lock for "test-preload-558000" in 350.208µs
	I0610 10:13:03.245485    3858 start.go:93] Provisioning new machine with config: &{Name:test-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-558000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:13:03.245731    3858 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:13:03.250356    3858 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:13:03.298099    3858 start.go:159] libmachine.API.Create for "test-preload-558000" (driver="qemu2")
	I0610 10:13:03.298138    3858 client.go:168] LocalClient.Create starting
	I0610 10:13:03.298252    3858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:13:03.298292    3858 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:03.298322    3858 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:03.298415    3858 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:13:03.298443    3858 main.go:141] libmachine: Decoding PEM data...
	I0610 10:13:03.298463    3858 main.go:141] libmachine: Parsing certificate...
	I0610 10:13:03.298902    3858 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:13:03.457680    3858 main.go:141] libmachine: Creating SSH key...
	I0610 10:13:03.818010    3858 main.go:141] libmachine: Creating Disk image...
	I0610 10:13:03.818023    3858 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:13:03.818267    3858 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 10:13:03.827828    3858 main.go:141] libmachine: STDOUT: 
	I0610 10:13:03.827842    3858 main.go:141] libmachine: STDERR: 
	I0610 10:13:03.827906    3858 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2 +20000M
	I0610 10:13:03.835247    3858 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:13:03.835269    3858 main.go:141] libmachine: STDERR: 
	I0610 10:13:03.835281    3858 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 10:13:03.835287    3858 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:13:03.835330    3858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:c1:79:10:1b:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 10:13:03.836915    3858 main.go:141] libmachine: STDOUT: 
	I0610 10:13:03.836931    3858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:13:03.836945    3858 client.go:171] LocalClient.Create took 538.81025ms
	I0610 10:13:05.716812    3858 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0610 10:13:05.716858    3858 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.781308417s
	I0610 10:13:05.716884    3858 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0610 10:13:05.716968    3858 cache.go:87] Successfully saved all images to host disk.
	I0610 10:13:05.839130    3858 start.go:128] duration metric: createHost completed in 2.593418167s
	I0610 10:13:05.839289    3858 start.go:83] releasing machines lock for "test-preload-558000", held for 2.593924334s
	W0610 10:13:05.839594    3858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:13:05.849034    3858 out.go:177] 
	W0610 10:13:05.853088    3858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:13:05.853126    3858 out.go:239] * 
	* 
	W0610 10:13:05.855537    3858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:13:05.869981    3858 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
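Editor's note: the stderr trace above shows the exact step that breaks. After qemu-img prepares the disk, libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and wires the guest NIC to the descriptor that client provides (-netdev socket,id=net0,fd=3), but the client cannot connect to the /var/run/socket_vmnet unix socket. Below is a minimal Go probe of that same connection, assuming the default SocketVMnetPath from the config dump above; it is an illustration for diagnosis, not minikube's own code.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath as logged in the cluster config above.
		const sock = "/var/run/socket_vmnet"

		// socket_vmnet_client has to connect to this unix socket before it can hand
		// a network file descriptor to QEMU; "connection refused" here reproduces
		// the failure seen throughout this report.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If this probe also refuses, the failure is host-side (the socket_vmnet daemon is not running, or it is listening on a different path) rather than anything specific to TestPreload.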
panic.go:522: *** TestPreload FAILED at 2023-06-10 10:13:05.883451 -0700 PDT m=+3116.949472959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-558000 -n test-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-558000 -n test-preload-558000: exit status 7 (67.243459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-558000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-558000
--- FAIL: TestPreload (10.22s)

TestScheduledStopUnix (10.05s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-714000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-714000 --memory=2048 --driver=qemu2 : exit status 80 (9.88504525s)

                                                
                                                
-- stdout --
	* [scheduled-stop-714000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-714000 in cluster scheduled-stop-714000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-714000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-714000 in cluster scheduled-stop-714000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-06-10 10:13:15.928417 -0700 PDT m=+3126.994587751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-714000 -n scheduled-stop-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-714000 -n scheduled-stop-714000: exit status 7 (71.338792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-714000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-714000
--- FAIL: TestScheduledStopUnix (10.05s)

TestSkaffold (16.16s)

                                                
                                                
=== RUN   TestSkaffold
E0610 10:13:17.809970    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1003781678 version
skaffold_test.go:63: skaffold version: v2.5.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-720000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-720000 --memory=2600 --driver=qemu2 : exit status 80 (9.826274167s)

                                                
                                                
-- stdout --
	* [skaffold-720000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-720000 in cluster skaffold-720000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-720000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-720000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-720000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-720000 in cluster skaffold-720000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-720000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-720000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-06-10 10:13:32.091892 -0700 PDT m=+3143.158303584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-720000 -n skaffold-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-720000 -n skaffold-720000: exit status 7 (63.334792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-720000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-720000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-720000
--- FAIL: TestSkaffold (16.16s)

TestRunningBinaryUpgrade (126.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
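Editor's note: unlike the other failures, this test never reaches the qemu2 driver; fetching the v1.6.2 release binary returns HTTP 404, presumably because that release predates darwin/arm64 builds, so a download URL derived from the running platform has no matching asset. A hedged preflight sketch in Go follows; the URL pattern is illustrative only and is an assumption, not necessarily what version_upgrade_test.go requests.

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"runtime"
	)

	func main() {
		// Hypothetical release-asset URL built from the current platform; on a
		// darwin/arm64 agent no such asset exists for v1.6.2, so this reports 404.
		url := fmt.Sprintf(
			"https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-%s-%s",
			runtime.GOOS, runtime.GOARCH)

		resp, err := http.Head(url)
		if err != nil {
			fmt.Fprintln(os.Stderr, "request failed:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}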
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-10 10:19:01.906877 -0700 PDT m=+3472.981064168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-922000 -n running-upgrade-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-922000 -n running-upgrade-922000: exit status 85 (84.942834ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-922000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-922000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-922000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-922000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-922000\"")
helpers_test.go:175: Cleaning up "running-upgrade-922000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-922000
--- FAIL: TestRunningBinaryUpgrade (126.20s)

TestKubernetesUpgrade (15.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.70020825s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-463000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-463000 in cluster kubernetes-upgrade-463000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:14:12.386832    4316 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:14:12.386955    4316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:14:12.386958    4316 out.go:309] Setting ErrFile to fd 2...
	I0610 10:14:12.386960    4316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:14:12.387034    4316 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:14:12.388031    4316 out.go:303] Setting JSON to false
	I0610 10:14:12.403116    4316 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4423,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:14:12.403169    4316 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:14:12.408554    4316 out.go:177] * [kubernetes-upgrade-463000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:14:12.416482    4316 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:14:12.420526    4316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:14:12.416521    4316 notify.go:220] Checking for updates...
	I0610 10:14:12.426504    4316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:14:12.429546    4316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:14:12.432523    4316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:14:12.435485    4316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:14:12.438992    4316 config.go:182] Loaded profile config "cert-expiration-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:14:12.439041    4316 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:14:12.443551    4316 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:14:12.450513    4316 start.go:297] selected driver: qemu2
	I0610 10:14:12.450520    4316 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:14:12.450530    4316 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:14:12.452384    4316 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:14:12.455490    4316 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:14:12.456940    4316 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:14:12.456953    4316 cni.go:84] Creating CNI manager for ""
	I0610 10:14:12.456959    4316 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 10:14:12.456962    4316 start_flags.go:319] config:
	{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:14:12.457036    4316 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:14:12.464526    4316 out.go:177] * Starting control plane node kubernetes-upgrade-463000 in cluster kubernetes-upgrade-463000
	I0610 10:14:12.468468    4316 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 10:14:12.468491    4316 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 10:14:12.468506    4316 cache.go:57] Caching tarball of preloaded images
	I0610 10:14:12.468572    4316 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:14:12.468585    4316 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 10:14:12.468651    4316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kubernetes-upgrade-463000/config.json ...
	I0610 10:14:12.468662    4316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kubernetes-upgrade-463000/config.json: {Name:mkec7aa5428bf565ff1e6d9dcf13def6f7526f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:14:12.468855    4316 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:14:12.468866    4316 start.go:364] acquiring machines lock for kubernetes-upgrade-463000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:14:12.468896    4316 start.go:368] acquired machines lock for "kubernetes-upgrade-463000" in 24.292µs
	I0610 10:14:12.468907    4316 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:14:12.468935    4316 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:14:12.477470    4316 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:14:12.494627    4316 start.go:159] libmachine.API.Create for "kubernetes-upgrade-463000" (driver="qemu2")
	I0610 10:14:12.494651    4316 client.go:168] LocalClient.Create starting
	I0610 10:14:12.494718    4316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:14:12.494739    4316 main.go:141] libmachine: Decoding PEM data...
	I0610 10:14:12.494749    4316 main.go:141] libmachine: Parsing certificate...
	I0610 10:14:12.494795    4316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:14:12.494810    4316 main.go:141] libmachine: Decoding PEM data...
	I0610 10:14:12.494819    4316 main.go:141] libmachine: Parsing certificate...
	I0610 10:14:12.495165    4316 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:14:12.603823    4316 main.go:141] libmachine: Creating SSH key...
	I0610 10:14:12.693818    4316 main.go:141] libmachine: Creating Disk image...
	I0610 10:14:12.693828    4316 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:14:12.693980    4316 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:12.702799    4316 main.go:141] libmachine: STDOUT: 
	I0610 10:14:12.702818    4316 main.go:141] libmachine: STDERR: 
	I0610 10:14:12.702862    4316 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2 +20000M
	I0610 10:14:12.710037    4316 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:14:12.710060    4316 main.go:141] libmachine: STDERR: 
	I0610 10:14:12.710082    4316 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:12.710097    4316 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:14:12.710139    4316 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b0:37:e3:6d:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:12.711667    4316 main.go:141] libmachine: STDOUT: 
	I0610 10:14:12.711683    4316 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:12.711702    4316 client.go:171] LocalClient.Create took 217.074041ms
	I0610 10:14:14.713574    4316 start.go:128] duration metric: createHost completed in 2.244934834s
	I0610 10:14:14.713636    4316 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 2.245043958s
	W0610 10:14:14.713750    4316 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:14.722980    4316 out.go:177] * Deleting "kubernetes-upgrade-463000" in qemu2 ...
	W0610 10:14:14.742057    4316 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:14.742084    4316 start.go:702] Will try again in 5 seconds ...
	I0610 10:14:19.743840    4316 start.go:364] acquiring machines lock for kubernetes-upgrade-463000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:14:19.744489    4316 start.go:368] acquired machines lock for "kubernetes-upgrade-463000" in 530.042µs
	I0610 10:14:19.744603    4316 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:14:19.744897    4316 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:14:19.754759    4316 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:14:19.802092    4316 start.go:159] libmachine.API.Create for "kubernetes-upgrade-463000" (driver="qemu2")
	I0610 10:14:19.802138    4316 client.go:168] LocalClient.Create starting
	I0610 10:14:19.802238    4316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:14:19.802281    4316 main.go:141] libmachine: Decoding PEM data...
	I0610 10:14:19.802304    4316 main.go:141] libmachine: Parsing certificate...
	I0610 10:14:19.802371    4316 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:14:19.802399    4316 main.go:141] libmachine: Decoding PEM data...
	I0610 10:14:19.802433    4316 main.go:141] libmachine: Parsing certificate...
	I0610 10:14:19.802928    4316 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:14:19.926113    4316 main.go:141] libmachine: Creating SSH key...
	I0610 10:14:19.996759    4316 main.go:141] libmachine: Creating Disk image...
	I0610 10:14:19.996764    4316 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:14:19.996903    4316 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:20.005433    4316 main.go:141] libmachine: STDOUT: 
	I0610 10:14:20.005447    4316 main.go:141] libmachine: STDERR: 
	I0610 10:14:20.005502    4316 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2 +20000M
	I0610 10:14:20.012585    4316 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:14:20.012598    4316 main.go:141] libmachine: STDERR: 
	I0610 10:14:20.012611    4316 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:20.012621    4316 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:14:20.012661    4316 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:bd:ce:ea:27:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:20.014155    4316 main.go:141] libmachine: STDOUT: 
	I0610 10:14:20.014169    4316 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:20.014180    4316 client.go:171] LocalClient.Create took 212.058625ms
	I0610 10:14:22.016233    4316 start.go:128] duration metric: createHost completed in 2.271509916s
	I0610 10:14:22.016311    4316 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 2.272005125s
	W0610 10:14:22.016662    4316 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:22.027353    4316 out.go:177] 
	W0610 10:14:22.031496    4316 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:14:22.031551    4316 out.go:239] * 
	* 
	W0610 10:14:22.034468    4316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:14:22.044363    4316 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-463000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-463000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-463000 status --format={{.Host}}: exit status 7 (37.179625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.172703166s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-463000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-463000 in cluster kubernetes-upgrade-463000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:14:22.227863    4335 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:14:22.227960    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:14:22.227964    4335 out.go:309] Setting ErrFile to fd 2...
	I0610 10:14:22.227966    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:14:22.228032    4335 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:14:22.229024    4335 out.go:303] Setting JSON to false
	I0610 10:14:22.244005    4335 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4433,"bootTime":1686412829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:14:22.244074    4335 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:14:22.249193    4335 out.go:177] * [kubernetes-upgrade-463000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:14:22.256205    4335 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:14:22.260180    4335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:14:22.256271    4335 notify.go:220] Checking for updates...
	I0610 10:14:22.266179    4335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:14:22.269177    4335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:14:22.272222    4335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:14:22.275233    4335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:14:22.278458    4335 config.go:182] Loaded profile config "kubernetes-upgrade-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 10:14:22.278707    4335 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:14:22.283159    4335 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:14:22.289134    4335 start.go:297] selected driver: qemu2
	I0610 10:14:22.289140    4335 start.go:875] validating driver "qemu2" against &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:14:22.289210    4335 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:14:22.291106    4335 cni.go:84] Creating CNI manager for ""
	I0610 10:14:22.291123    4335 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:14:22.291129    4335 start_flags.go:319] config:
	{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-463000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:14:22.291211    4335 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:14:22.299192    4335 out.go:177] * Starting control plane node kubernetes-upgrade-463000 in cluster kubernetes-upgrade-463000
	I0610 10:14:22.303165    4335 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:14:22.303185    4335 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:14:22.303195    4335 cache.go:57] Caching tarball of preloaded images
	I0610 10:14:22.303248    4335 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:14:22.303253    4335 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:14:22.303325    4335 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kubernetes-upgrade-463000/config.json ...
	I0610 10:14:22.303686    4335 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:14:22.303696    4335 start.go:364] acquiring machines lock for kubernetes-upgrade-463000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:14:22.303726    4335 start.go:368] acquired machines lock for "kubernetes-upgrade-463000" in 24.959µs
	I0610 10:14:22.303738    4335 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:14:22.303743    4335 fix.go:55] fixHost starting: 
	I0610 10:14:22.303864    4335 fix.go:103] recreateIfNeeded on kubernetes-upgrade-463000: state=Stopped err=<nil>
	W0610 10:14:22.303872    4335 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:14:22.312129    4335 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	I0610 10:14:22.316184    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:bd:ce:ea:27:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:22.318103    4335 main.go:141] libmachine: STDOUT: 
	I0610 10:14:22.318124    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:22.318157    4335 fix.go:57] fixHost completed within 14.414625ms
	I0610 10:14:22.318163    4335 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 14.433125ms
	W0610 10:14:22.318170    4335 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:14:22.318218    4335 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:22.318223    4335 start.go:702] Will try again in 5 seconds ...
	I0610 10:14:27.320057    4335 start.go:364] acquiring machines lock for kubernetes-upgrade-463000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:14:27.320416    4335 start.go:368] acquired machines lock for "kubernetes-upgrade-463000" in 271.834µs
	I0610 10:14:27.320555    4335 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:14:27.320574    4335 fix.go:55] fixHost starting: 
	I0610 10:14:27.321236    4335 fix.go:103] recreateIfNeeded on kubernetes-upgrade-463000: state=Stopped err=<nil>
	W0610 10:14:27.321262    4335 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:14:27.325681    4335 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	I0610 10:14:27.329792    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:bd:ce:ea:27:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:27.338122    4335 main.go:141] libmachine: STDOUT: 
	I0610 10:14:27.338179    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:27.338259    4335 fix.go:57] fixHost completed within 17.686792ms
	I0610 10:14:27.338277    4335 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 17.839042ms
	W0610 10:14:27.338499    4335 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:27.347630    4335 out.go:177] 
	W0610 10:14:27.351659    4335 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:14:27.351675    4335 out.go:239] * 
	* 
	W0610 10:14:27.353653    4335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:14:27.360603    4335 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-463000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-463000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-463000 version --output=json: exit status 1 (61.059416ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-463000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-06-10 10:14:27.435212 -0700 PDT m=+3198.504336793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-463000 -n kubernetes-upgrade-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-463000 -n kubernetes-upgrade-463000: exit status 7 (32.317625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-463000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-463000
--- FAIL: TestKubernetesUpgrade (15.21s)
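Every qemu2 start in this test dies with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so no VM is ever created and the subsequent kubectl context is missing. A minimal sketch (not part of the test suite) that checks whether anything is listening on that socket follows; the path matches the SocketVMnetPath value in the config dumps above, and a refusal here usually means the socket_vmnet daemon is not running on the host.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket path the qemu2 driver passes to socket_vmnet_client in the logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" or "no such file or directory" mirrors the driver failure.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}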

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.55s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16578
- KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4220253196/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.55s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.3s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16578
- KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2692663237/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.30s)
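Both TestHyperkitDriverSkipUpgrade subtests exit with DRV_UNSUPPORTED_OS because the hyperkit driver only exists for darwin/amd64. Below is a minimal sketch of the kind of platform guard that would skip these tests on Apple Silicon instead of failing them; it is an assumed workaround, not the suite's actual skip logic.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// hyperkit has no darwin/arm64 build, so bail out early on Apple Silicon.
	if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
		fmt.Println("skip: the hyperkit driver is only available on darwin/amd64")
		return
	}
	fmt.Println("hyperkit driver tests can proceed on", runtime.GOOS+"/"+runtime.GOARCH)
}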

                                                
                                    
TestStoppedBinaryUpgrade/Setup (145.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0610 10:15:09.469837    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:16:32.538079    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (145.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (2.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe start -p stopped-upgrade-365000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe start -p stopped-upgrade-365000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe: permission denied (7.228166ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe start -p stopped-upgrade-365000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe start -p stopped-upgrade-365000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe: permission denied (6.616375ms)
E0610 10:16:54.733799    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe start -p stopped-upgrade-365000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe start -p stopped-upgrade-365000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe: permission denied (6.562209ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.17881737.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.80s)
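The legacy binary fetched for this subtest cannot be executed: fork/exec reports "permission denied" on every retry, which most often means the downloaded file is missing its execute bit. The sketch below shows an assumed fix, restoring the mode before running the binary; the /tmp path and the "version" argument are placeholders for illustration, not values taken from the test.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Placeholder path; the real test downloads into a temp dir under /var/folders.
	bin := "/tmp/minikube-v1.6.2.exe"

	// Ensure the binary is executable before trying to fork/exec it.
	if err := os.Chmod(bin, 0o755); err != nil {
		fmt.Println("chmod failed:", err)
		return
	}

	out, err := exec.Command(bin, "version").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("run failed:", err)
	}
}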

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-365000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-365000: exit status 85 (115.72275ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-558000                               | test-preload-558000       | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| start   | -p scheduled-stop-714000                             | scheduled-stop-714000     | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --memory=2048 --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-714000                             | scheduled-stop-714000     | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| start   | -p skaffold-720000                                   | skaffold-720000           | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --memory=2600 --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p skaffold-720000                                   | skaffold-720000           | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| start   | -p offline-docker-407000                             | offline-docker-407000     | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --memory=2048 --wait=true                            |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo crictl                         | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo crictl                         | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo find                           | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo ip a s                         | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	| ssh     | -p cilium-472000 sudo ip r s                         | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo iptables                       | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo docker                         | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo cat                            | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo                                | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo find                           | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-472000 sudo crio                           | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-472000                                     | cilium-472000             | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| delete  | -p offline-docker-407000                             | offline-docker-407000     | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| start   | -p force-systemd-env-535000                          | force-systemd-env-535000  | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-177000                         | force-systemd-flag-177000 | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-535000                             | force-systemd-env-535000  | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | ssh docker info --format                             |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-535000                          | force-systemd-env-535000  | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| start   | -p docker-flags-821000                               | docker-flags-821000       | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --cache-images=false                                 |                           |         |         |                     |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=false                                         |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                                 |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                                 |                           |         |         |                     |                     |
	|         | --docker-opt=debug                                   |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                                |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-177000                            | force-systemd-flag-177000 | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | ssh docker info --format                             |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-177000                         | force-systemd-flag-177000 | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT | 10 Jun 23 10:13 PDT |
	| start   | -p cert-expiration-841000                            | cert-expiration-841000    | jenkins | v1.30.1 | 10 Jun 23 10:13 PDT |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | docker-flags-821000 ssh                              | docker-flags-821000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | sudo systemctl show docker                           |                           |         |         |                     |                     |
	|         | --property=Environment                               |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | docker-flags-821000 ssh                              | docker-flags-821000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | sudo systemctl show docker                           |                           |         |         |                     |                     |
	|         | --property=ExecStart                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| delete  | -p docker-flags-821000                               | docker-flags-821000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT | 10 Jun 23 10:14 PDT |
	| start   | -p cert-options-834000                               | cert-options-834000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | cert-options-834000 ssh                              | cert-options-834000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-834000 -- sudo                       | cert-options-834000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-834000                               | cert-options-834000       | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT | 10 Jun 23 10:14 PDT |
	| start   | -p kubernetes-upgrade-463000                         | kubernetes-upgrade-463000 | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-463000                         | kubernetes-upgrade-463000 | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT | 10 Jun 23 10:14 PDT |
	| start   | -p kubernetes-upgrade-463000                         | kubernetes-upgrade-463000 | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-463000                         | kubernetes-upgrade-463000 | jenkins | v1.30.1 | 10 Jun 23 10:14 PDT | 10 Jun 23 10:14 PDT |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 10:14:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:14:22.227863    4335 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:14:22.227960    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:14:22.227964    4335 out.go:309] Setting ErrFile to fd 2...
	I0610 10:14:22.227966    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:14:22.228032    4335 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:14:22.229024    4335 out.go:303] Setting JSON to false
	I0610 10:14:22.244005    4335 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4433,"bootTime":1686412829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:14:22.244074    4335 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:14:22.249193    4335 out.go:177] * [kubernetes-upgrade-463000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:14:22.256205    4335 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:14:22.260180    4335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:14:22.256271    4335 notify.go:220] Checking for updates...
	I0610 10:14:22.266179    4335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:14:22.269177    4335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:14:22.272222    4335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:14:22.275233    4335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:14:22.278458    4335 config.go:182] Loaded profile config "kubernetes-upgrade-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 10:14:22.278707    4335 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:14:22.283159    4335 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:14:22.289134    4335 start.go:297] selected driver: qemu2
	I0610 10:14:22.289140    4335 start.go:875] validating driver "qemu2" against &{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-463000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:14:22.289210    4335 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:14:22.291106    4335 cni.go:84] Creating CNI manager for ""
	I0610 10:14:22.291123    4335 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:14:22.291129    4335 start_flags.go:319] config:
	{Name:kubernetes-upgrade-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-463000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:14:22.291211    4335 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:14:22.299192    4335 out.go:177] * Starting control plane node kubernetes-upgrade-463000 in cluster kubernetes-upgrade-463000
	I0610 10:14:22.303165    4335 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:14:22.303185    4335 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:14:22.303195    4335 cache.go:57] Caching tarball of preloaded images
	I0610 10:14:22.303248    4335 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:14:22.303253    4335 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:14:22.303325    4335 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kubernetes-upgrade-463000/config.json ...
	I0610 10:14:22.303686    4335 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:14:22.303696    4335 start.go:364] acquiring machines lock for kubernetes-upgrade-463000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:14:22.303726    4335 start.go:368] acquired machines lock for "kubernetes-upgrade-463000" in 24.959µs
	I0610 10:14:22.303738    4335 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:14:22.303743    4335 fix.go:55] fixHost starting: 
	I0610 10:14:22.303864    4335 fix.go:103] recreateIfNeeded on kubernetes-upgrade-463000: state=Stopped err=<nil>
	W0610 10:14:22.303872    4335 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:14:22.312129    4335 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	I0610 10:14:22.316184    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:bd:ce:ea:27:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:22.318103    4335 main.go:141] libmachine: STDOUT: 
	I0610 10:14:22.318124    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:22.318157    4335 fix.go:57] fixHost completed within 14.414625ms
	I0610 10:14:22.318163    4335 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 14.433125ms
	W0610 10:14:22.318170    4335 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:14:22.318218    4335 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:22.318223    4335 start.go:702] Will try again in 5 seconds ...
	I0610 10:14:27.320057    4335 start.go:364] acquiring machines lock for kubernetes-upgrade-463000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:14:27.320416    4335 start.go:368] acquired machines lock for "kubernetes-upgrade-463000" in 271.834µs
	I0610 10:14:27.320555    4335 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:14:27.320574    4335 fix.go:55] fixHost starting: 
	I0610 10:14:27.321236    4335 fix.go:103] recreateIfNeeded on kubernetes-upgrade-463000: state=Stopped err=<nil>
	W0610 10:14:27.321262    4335 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:14:27.325681    4335 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-463000" ...
	I0610 10:14:27.329792    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:bd:ce:ea:27:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubernetes-upgrade-463000/disk.qcow2
	I0610 10:14:27.338122    4335 main.go:141] libmachine: STDOUT: 
	I0610 10:14:27.338179    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:14:27.338259    4335 fix.go:57] fixHost completed within 17.686792ms
	I0610 10:14:27.338277    4335 start.go:83] releasing machines lock for "kubernetes-upgrade-463000", held for 17.839042ms
	W0610 10:14:27.338499    4335 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:14:27.347630    4335 out.go:177] 
	W0610 10:14:27.351659    4335 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:14:27.351675    4335 out.go:239] * 
	W0610 10:14:27.353653    4335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:14:27.360603    4335 out.go:177] 
	
	* 
	* Profile "stopped-upgrade-365000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-365000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
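Every failed start in this run traces back to the same host-side error visible in the log above: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the agent, assuming socket_vmnet was installed via Homebrew as the minikube QEMU docs suggest (the service name and socket path are assumptions and may differ on this machine):

        # check whether the daemon socket exists on the host
        ls -l /var/run/socket_vmnet

        # check whether the Homebrew-managed service is running (assumes a Homebrew install)
        sudo brew services list | grep socket_vmnet

        # restart it if it is stopped; it typically needs root privileges for the vmnet framework
        sudo brew services restart socket_vmnet

If the daemon is up afterwards, rerunning any one of the failed tests should confirm whether the remaining failures share this root cause.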

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-009000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-009000 --driver=qemu2 : exit status 80 (9.658488667s)

                                                
                                                
-- stdout --
	* [NoKubernetes-009000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-009000 in cluster NoKubernetes-009000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-009000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000: exit status 7 (72.770417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.73s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --driver=qemu2 : exit status 80 (5.397103584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-009000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-009000
	* Restarting existing qemu2 VM for "NoKubernetes-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000: exit status 7 (70.494125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)

                                                
                                    
TestNoKubernetes/serial/Start (5.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --driver=qemu2 : exit status 80 (5.402792083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-009000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-009000
	* Restarting existing qemu2 VM for "NoKubernetes-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000: exit status 7 (67.800958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-009000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-009000 --driver=qemu2 : exit status 80 (5.393797125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-009000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-009000
	* Restarting existing qemu2 VM for "NoKubernetes-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-009000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-009000 -n NoKubernetes-009000: exit status 7 (68.58925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.46s)

                                                
                                    
TestPause/serial/Start (9.79s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-880000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-880000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.724265333s)

                                                
                                                
-- stdout --
	* [pause-880000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-880000 in cluster pause-880000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-880000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-880000 -n pause-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-880000 -n pause-880000: exit status 7 (68.8625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.79s)
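The network-plugin and start/stop groups that follow fail the same way before Kubernetes is ever provisioned, so a quick way to separate a broken driver from a broken socket_vmnet setup is to start a throwaway profile on the driver's user-mode network, which does not go through the socket_vmnet daemon. A sketch only, assuming this minikube build accepts --network=builtin for the qemu2 driver (the profile name vmnet-probe is made up; builtin networking does not support minikube service or minikube tunnel):

        out/minikube-darwin-arm64 start -p vmnet-probe --driver=qemu2 --network=builtin
        out/minikube-darwin-arm64 delete -p vmnet-probe

If this succeeds while socket_vmnet-backed starts keep failing, the problem is isolated to the daemon rather than QEMU or the ISO.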

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.709583041s)

                                                
                                                
-- stdout --
	* [auto-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-472000 in cluster auto-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:17:49.116518    4518 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:17:49.116651    4518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:17:49.116654    4518 out.go:309] Setting ErrFile to fd 2...
	I0610 10:17:49.116657    4518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:17:49.116723    4518 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:17:49.117758    4518 out.go:303] Setting JSON to false
	I0610 10:17:49.132962    4518 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4640,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:17:49.133035    4518 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:17:49.138506    4518 out.go:177] * [auto-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:17:49.146468    4518 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:17:49.146532    4518 notify.go:220] Checking for updates...
	I0610 10:17:49.151866    4518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:17:49.154522    4518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:17:49.157484    4518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:17:49.160486    4518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:17:49.163449    4518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:17:49.166646    4518 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:17:49.170616    4518 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:17:49.177433    4518 start.go:297] selected driver: qemu2
	I0610 10:17:49.177438    4518 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:17:49.177447    4518 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:17:49.179242    4518 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:17:49.182454    4518 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:17:49.185544    4518 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:17:49.185565    4518 cni.go:84] Creating CNI manager for ""
	I0610 10:17:49.185572    4518 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:17:49.185577    4518 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:17:49.185588    4518 start_flags.go:319] config:
	{Name:auto-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:17:49.185673    4518 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:17:49.196424    4518 out.go:177] * Starting control plane node auto-472000 in cluster auto-472000
	I0610 10:17:49.200463    4518 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:17:49.200492    4518 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:17:49.200505    4518 cache.go:57] Caching tarball of preloaded images
	I0610 10:17:49.200571    4518 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:17:49.200576    4518 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:17:49.200794    4518 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/auto-472000/config.json ...
	I0610 10:17:49.200814    4518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/auto-472000/config.json: {Name:mk4c8b1c6950bbcf86750bb00349718b9fbda9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:17:49.201030    4518 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:17:49.201043    4518 start.go:364] acquiring machines lock for auto-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:17:49.201074    4518 start.go:368] acquired machines lock for "auto-472000" in 26µs
	I0610 10:17:49.201088    4518 start.go:93] Provisioning new machine with config: &{Name:auto-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.2 ClusterName:auto-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:17:49.201113    4518 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:17:49.209444    4518 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:17:49.227053    4518 start.go:159] libmachine.API.Create for "auto-472000" (driver="qemu2")
	I0610 10:17:49.227079    4518 client.go:168] LocalClient.Create starting
	I0610 10:17:49.227135    4518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:17:49.227154    4518 main.go:141] libmachine: Decoding PEM data...
	I0610 10:17:49.227168    4518 main.go:141] libmachine: Parsing certificate...
	I0610 10:17:49.227224    4518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:17:49.227239    4518 main.go:141] libmachine: Decoding PEM data...
	I0610 10:17:49.227248    4518 main.go:141] libmachine: Parsing certificate...
	I0610 10:17:49.227577    4518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:17:49.336711    4518 main.go:141] libmachine: Creating SSH key...
	I0610 10:17:49.431768    4518 main.go:141] libmachine: Creating Disk image...
	I0610 10:17:49.431775    4518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:17:49.431936    4518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2
	I0610 10:17:49.440613    4518 main.go:141] libmachine: STDOUT: 
	I0610 10:17:49.440626    4518 main.go:141] libmachine: STDERR: 
	I0610 10:17:49.440677    4518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2 +20000M
	I0610 10:17:49.447806    4518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:17:49.447817    4518 main.go:141] libmachine: STDERR: 
	I0610 10:17:49.447833    4518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2
	I0610 10:17:49.447838    4518 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:17:49.447872    4518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:1d:41:f1:53:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2
	I0610 10:17:49.449394    4518 main.go:141] libmachine: STDOUT: 
	I0610 10:17:49.449409    4518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:17:49.449433    4518 client.go:171] LocalClient.Create took 222.351959ms
	I0610 10:17:51.451567    4518 start.go:128] duration metric: createHost completed in 2.25047125s
	I0610 10:17:51.451633    4518 start.go:83] releasing machines lock for "auto-472000", held for 2.25058375s
	W0610 10:17:51.451725    4518 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:17:51.462955    4518 out.go:177] * Deleting "auto-472000" in qemu2 ...
	W0610 10:17:51.483332    4518 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:17:51.483358    4518 start.go:702] Will try again in 5 seconds ...
	I0610 10:17:56.485560    4518 start.go:364] acquiring machines lock for auto-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:17:56.485979    4518 start.go:368] acquired machines lock for "auto-472000" in 321.667µs
	I0610 10:17:56.486111    4518 start.go:93] Provisioning new machine with config: &{Name:auto-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.2 ClusterName:auto-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:17:56.486424    4518 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:17:56.496144    4518 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:17:56.543534    4518 start.go:159] libmachine.API.Create for "auto-472000" (driver="qemu2")
	I0610 10:17:56.543600    4518 client.go:168] LocalClient.Create starting
	I0610 10:17:56.543727    4518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:17:56.543773    4518 main.go:141] libmachine: Decoding PEM data...
	I0610 10:17:56.543801    4518 main.go:141] libmachine: Parsing certificate...
	I0610 10:17:56.543884    4518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:17:56.543913    4518 main.go:141] libmachine: Decoding PEM data...
	I0610 10:17:56.543930    4518 main.go:141] libmachine: Parsing certificate...
	I0610 10:17:56.544498    4518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:17:56.663029    4518 main.go:141] libmachine: Creating SSH key...
	I0610 10:17:56.741370    4518 main.go:141] libmachine: Creating Disk image...
	I0610 10:17:56.741376    4518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:17:56.741518    4518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2
	I0610 10:17:56.750202    4518 main.go:141] libmachine: STDOUT: 
	I0610 10:17:56.750214    4518 main.go:141] libmachine: STDERR: 
	I0610 10:17:56.750285    4518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2 +20000M
	I0610 10:17:56.757419    4518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:17:56.757441    4518 main.go:141] libmachine: STDERR: 
	I0610 10:17:56.757458    4518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2
	I0610 10:17:56.757464    4518 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:17:56.757511    4518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:07:ce:52:00:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/auto-472000/disk.qcow2
	I0610 10:17:56.759049    4518 main.go:141] libmachine: STDOUT: 
	I0610 10:17:56.759060    4518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:17:56.759074    4518 client.go:171] LocalClient.Create took 215.472417ms
	I0610 10:17:58.761243    4518 start.go:128] duration metric: createHost completed in 2.274808209s
	I0610 10:17:58.761342    4518 start.go:83] releasing machines lock for "auto-472000", held for 2.2753715s
	W0610 10:17:58.761824    4518 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:17:58.770287    4518 out.go:177] 
	W0610 10:17:58.773441    4518 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:17:58.773476    4518 out.go:239] * 
	* 
	W0610 10:17:58.776281    4518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:17:58.784311    4518 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.72s)
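Every network-plugin start in this group fails the same way: the qemu2 VM never launches because `socket_vmnet_client` cannot connect to the unix socket at "/var/run/socket_vmnet" (Connection refused), on both the first create attempt and the retry five seconds later. A small diagnostic sketch in Go (an illustrative assumption, not a minikube command or test helper) can confirm whether the socket_vmnet daemon is listening on the path that appears in the qemu command lines above:

	// Hypothetical diagnostic sketch; not something minikube or the tests run.
	// It dials the unix socket used by socket_vmnet_client in the command
	// lines above; "connection refused" here means the socket_vmnet daemon
	// is not running or not listening on that path. It may need the same
	// privileges as socket_vmnet_client to reach the socket.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from SocketVMnetPath in the config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}

If the dial fails with "connection refused", as it evidently did for every profile below, the likely fix is on the host side (starting or repairing the socket_vmnet daemon) rather than in any of the failing tests.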

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.733076292s)

                                                
                                                
-- stdout --
	* [kindnet-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-472000 in cluster kindnet-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:18:00.929969    4629 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:18:00.930088    4629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:00.930091    4629 out.go:309] Setting ErrFile to fd 2...
	I0610 10:18:00.930094    4629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:00.930157    4629 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:18:00.931187    4629 out.go:303] Setting JSON to false
	I0610 10:18:00.946155    4629 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4651,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:18:00.946227    4629 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:18:00.949846    4629 out.go:177] * [kindnet-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:18:00.956873    4629 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:18:00.956917    4629 notify.go:220] Checking for updates...
	I0610 10:18:00.963856    4629 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:18:00.966876    4629 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:18:00.969818    4629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:18:00.972837    4629 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:18:00.975839    4629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:18:00.977368    4629 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:18:00.981843    4629 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:18:00.988666    4629 start.go:297] selected driver: qemu2
	I0610 10:18:00.988673    4629 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:18:00.988681    4629 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:18:00.990466    4629 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:18:00.993818    4629 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:18:00.996916    4629 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:18:00.996930    4629 cni.go:84] Creating CNI manager for "kindnet"
	I0610 10:18:00.996935    4629 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 10:18:00.996943    4629 start_flags.go:319] config:
	{Name:kindnet-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:18:00.997025    4629 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:18:01.000865    4629 out.go:177] * Starting control plane node kindnet-472000 in cluster kindnet-472000
	I0610 10:18:01.008860    4629 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:18:01.008883    4629 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:18:01.008901    4629 cache.go:57] Caching tarball of preloaded images
	I0610 10:18:01.008964    4629 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:18:01.008970    4629 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:18:01.009173    4629 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kindnet-472000/config.json ...
	I0610 10:18:01.009185    4629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kindnet-472000/config.json: {Name:mkd512fd2764273dd3fb84d2d8c8eec7ad7c6888 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:18:01.009382    4629 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:18:01.009395    4629 start.go:364] acquiring machines lock for kindnet-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:01.009424    4629 start.go:368] acquired machines lock for "kindnet-472000" in 24.416µs
	I0610 10:18:01.009435    4629 start.go:93] Provisioning new machine with config: &{Name:kindnet-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:01.009460    4629 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:01.017867    4629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:01.034808    4629 start.go:159] libmachine.API.Create for "kindnet-472000" (driver="qemu2")
	I0610 10:18:01.034824    4629 client.go:168] LocalClient.Create starting
	I0610 10:18:01.034886    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:01.034905    4629 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:01.034917    4629 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:01.034952    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:01.034969    4629 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:01.034976    4629 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:01.035281    4629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:01.145451    4629 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:01.275618    4629 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:01.275625    4629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:01.275782    4629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2
	I0610 10:18:01.284636    4629 main.go:141] libmachine: STDOUT: 
	I0610 10:18:01.284650    4629 main.go:141] libmachine: STDERR: 
	I0610 10:18:01.284706    4629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2 +20000M
	I0610 10:18:01.291814    4629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:01.291827    4629 main.go:141] libmachine: STDERR: 
	I0610 10:18:01.291849    4629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2
	I0610 10:18:01.291854    4629 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:01.291889    4629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:aa:54:3c:06:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2
	I0610 10:18:01.293417    4629 main.go:141] libmachine: STDOUT: 
	I0610 10:18:01.293429    4629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:01.293449    4629 client.go:171] LocalClient.Create took 258.623916ms
	I0610 10:18:03.295576    4629 start.go:128] duration metric: createHost completed in 2.286130125s
	I0610 10:18:03.295688    4629 start.go:83] releasing machines lock for "kindnet-472000", held for 2.286290042s
	W0610 10:18:03.295754    4629 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:03.303475    4629 out.go:177] * Deleting "kindnet-472000" in qemu2 ...
	W0610 10:18:03.323523    4629 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:03.323549    4629 start.go:702] Will try again in 5 seconds ...
	I0610 10:18:08.325003    4629 start.go:364] acquiring machines lock for kindnet-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:08.325579    4629 start.go:368] acquired machines lock for "kindnet-472000" in 451.209µs
	I0610 10:18:08.325778    4629 start.go:93] Provisioning new machine with config: &{Name:kindnet-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:08.326050    4629 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:08.333746    4629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:08.383061    4629 start.go:159] libmachine.API.Create for "kindnet-472000" (driver="qemu2")
	I0610 10:18:08.383100    4629 client.go:168] LocalClient.Create starting
	I0610 10:18:08.383246    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:08.383295    4629 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:08.383317    4629 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:08.383418    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:08.383449    4629 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:08.383467    4629 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:08.384070    4629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:08.504731    4629 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:08.575294    4629 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:08.575299    4629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:08.575439    4629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2
	I0610 10:18:08.584082    4629 main.go:141] libmachine: STDOUT: 
	I0610 10:18:08.584101    4629 main.go:141] libmachine: STDERR: 
	I0610 10:18:08.584158    4629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2 +20000M
	I0610 10:18:08.591306    4629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:08.591317    4629 main.go:141] libmachine: STDERR: 
	I0610 10:18:08.591334    4629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2
	I0610 10:18:08.591341    4629 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:08.591381    4629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b0:c1:05:0d:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kindnet-472000/disk.qcow2
	I0610 10:18:08.592868    4629 main.go:141] libmachine: STDOUT: 
	I0610 10:18:08.592879    4629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:08.592893    4629 client.go:171] LocalClient.Create took 209.792292ms
	I0610 10:18:10.595022    4629 start.go:128] duration metric: createHost completed in 2.268956166s
	I0610 10:18:10.595211    4629 start.go:83] releasing machines lock for "kindnet-472000", held for 2.269483375s
	W0610 10:18:10.595561    4629 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:10.604427    4629 out.go:177] 
	W0610 10:18:10.608416    4629 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:18:10.608444    4629 out.go:239] * 
	* 
	W0610 10:18:10.611077    4629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:18:10.621363    4629 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.74s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.713758333s)

                                                
                                                
-- stdout --
	* [calico-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-472000 in cluster calico-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:18:12.873023    4747 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:18:12.873162    4747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:12.873165    4747 out.go:309] Setting ErrFile to fd 2...
	I0610 10:18:12.873167    4747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:12.873238    4747 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:18:12.874251    4747 out.go:303] Setting JSON to false
	I0610 10:18:12.889302    4747 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4663,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:18:12.889379    4747 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:18:12.893097    4747 out.go:177] * [calico-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:18:12.900074    4747 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:18:12.904047    4747 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:18:12.900153    4747 notify.go:220] Checking for updates...
	I0610 10:18:12.907055    4747 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:18:12.909995    4747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:18:12.913006    4747 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:18:12.916044    4747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:18:12.917577    4747 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:18:12.921999    4747 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:18:12.928905    4747 start.go:297] selected driver: qemu2
	I0610 10:18:12.928910    4747 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:18:12.928919    4747 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:18:12.930720    4747 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:18:12.934012    4747 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:18:12.937144    4747 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:18:12.937164    4747 cni.go:84] Creating CNI manager for "calico"
	I0610 10:18:12.937175    4747 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0610 10:18:12.937181    4747 start_flags.go:319] config:
	{Name:calico-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:18:12.937262    4747 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:18:12.944992    4747 out.go:177] * Starting control plane node calico-472000 in cluster calico-472000
	I0610 10:18:12.949085    4747 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:18:12.949104    4747 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:18:12.949119    4747 cache.go:57] Caching tarball of preloaded images
	I0610 10:18:12.949175    4747 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:18:12.949181    4747 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:18:12.949381    4747 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/calico-472000/config.json ...
	I0610 10:18:12.949392    4747 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/calico-472000/config.json: {Name:mk5ade0dd6bff523006b477048f9c364fc9a1f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:18:12.949587    4747 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:18:12.949597    4747 start.go:364] acquiring machines lock for calico-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:12.949624    4747 start.go:368] acquired machines lock for "calico-472000" in 23.083µs
	I0610 10:18:12.949635    4747 start.go:93] Provisioning new machine with config: &{Name:calico-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:calico-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:12.949657    4747 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:12.957051    4747 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:12.973252    4747 start.go:159] libmachine.API.Create for "calico-472000" (driver="qemu2")
	I0610 10:18:12.973269    4747 client.go:168] LocalClient.Create starting
	I0610 10:18:12.973324    4747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:12.973343    4747 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:12.973352    4747 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:12.973383    4747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:12.973396    4747 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:12.973402    4747 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:12.973692    4747 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:13.080054    4747 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:13.126709    4747 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:13.126714    4747 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:13.126848    4747 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2
	I0610 10:18:13.135390    4747 main.go:141] libmachine: STDOUT: 
	I0610 10:18:13.135402    4747 main.go:141] libmachine: STDERR: 
	I0610 10:18:13.135451    4747 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2 +20000M
	I0610 10:18:13.142592    4747 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:13.142605    4747 main.go:141] libmachine: STDERR: 
	I0610 10:18:13.142624    4747 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2
	I0610 10:18:13.142629    4747 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:13.142667    4747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:c7:bd:a9:5c:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2
	I0610 10:18:13.144225    4747 main.go:141] libmachine: STDOUT: 
	I0610 10:18:13.144237    4747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:13.144255    4747 client.go:171] LocalClient.Create took 170.982875ms
	I0610 10:18:15.146456    4747 start.go:128] duration metric: createHost completed in 2.196800458s
	I0610 10:18:15.146539    4747 start.go:83] releasing machines lock for "calico-472000", held for 2.196936917s
	W0610 10:18:15.146600    4747 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:15.157608    4747 out.go:177] * Deleting "calico-472000" in qemu2 ...
	W0610 10:18:15.177561    4747 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:15.177589    4747 start.go:702] Will try again in 5 seconds ...
	I0610 10:18:20.179771    4747 start.go:364] acquiring machines lock for calico-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:20.180489    4747 start.go:368] acquired machines lock for "calico-472000" in 583.541µs
	I0610 10:18:20.180609    4747 start.go:93] Provisioning new machine with config: &{Name:calico-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:calico-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:20.180872    4747 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:20.190922    4747 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:20.240479    4747 start.go:159] libmachine.API.Create for "calico-472000" (driver="qemu2")
	I0610 10:18:20.240528    4747 client.go:168] LocalClient.Create starting
	I0610 10:18:20.240641    4747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:20.240684    4747 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:20.240700    4747 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:20.240785    4747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:20.240822    4747 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:20.240842    4747 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:20.241399    4747 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:20.356918    4747 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:20.498686    4747 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:20.498695    4747 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:20.498902    4747 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2
	I0610 10:18:20.507921    4747 main.go:141] libmachine: STDOUT: 
	I0610 10:18:20.507941    4747 main.go:141] libmachine: STDERR: 
	I0610 10:18:20.508001    4747 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2 +20000M
	I0610 10:18:20.515269    4747 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:20.515281    4747 main.go:141] libmachine: STDERR: 
	I0610 10:18:20.515300    4747 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2
	I0610 10:18:20.515306    4747 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:20.515347    4747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9a:33:b9:a1:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/calico-472000/disk.qcow2
	I0610 10:18:20.516869    4747 main.go:141] libmachine: STDOUT: 
	I0610 10:18:20.516881    4747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:20.516892    4747 client.go:171] LocalClient.Create took 276.36375ms
	I0610 10:18:22.519021    4747 start.go:128] duration metric: createHost completed in 2.338162458s
	I0610 10:18:22.519084    4747 start.go:83] releasing machines lock for "calico-472000", held for 2.338604583s
	W0610 10:18:22.519504    4747 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:22.529159    4747 out.go:177] 
	W0610 10:18:22.533255    4747 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:18:22.533281    4747 out.go:239] * 
	* 
	W0610 10:18:22.536173    4747 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:18:22.545212    4747 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.72s)
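
Every network-plugin Start test in this run fails at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM is never created and minikube exits with status 80. The snippet below is an illustrative sketch, not part of minikube or net_test.go, that reproduces just that connectivity check by dialing the same socket path; the 2-second timeout is an arbitrary choice.

// probe_socket_vmnet.go — hypothetical standalone probe, not minikube code.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// err is ECONNREFUSED when the socket file exists but no daemon is
		// accepting, which matches the libmachine error reported above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}

If a socket_vmnet daemon were running and listening on that path, both this probe and socket_vmnet_client would connect; on this agent it evidently is not, which is why the remaining group/*/Start tests below fail in the same way.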

TestNetworkPlugins/group/custom-flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.837369833s)

-- stdout --
	* [custom-flannel-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-472000 in cluster custom-flannel-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 10:18:24.929675    4864 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:18:24.929806    4864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:24.929809    4864 out.go:309] Setting ErrFile to fd 2...
	I0610 10:18:24.929811    4864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:24.929880    4864 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:18:24.930905    4864 out.go:303] Setting JSON to false
	I0610 10:18:24.945807    4864 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4675,"bootTime":1686412829,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:18:24.945886    4864 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:18:24.950827    4864 out.go:177] * [custom-flannel-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:18:24.957905    4864 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:18:24.957979    4864 notify.go:220] Checking for updates...
	I0610 10:18:24.961814    4864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:18:24.964826    4864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:18:24.967831    4864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:18:24.970801    4864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:18:24.973761    4864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:18:24.977044    4864 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:18:24.980766    4864 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:18:24.987811    4864 start.go:297] selected driver: qemu2
	I0610 10:18:24.987816    4864 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:18:24.987825    4864 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:18:24.989590    4864 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:18:24.992731    4864 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:18:24.995867    4864 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:18:24.995884    4864 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0610 10:18:24.995902    4864 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0610 10:18:24.995908    4864 start_flags.go:319] config:
	{Name:custom-flannel-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP:}
	I0610 10:18:24.995995    4864 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:18:24.999824    4864 out.go:177] * Starting control plane node custom-flannel-472000 in cluster custom-flannel-472000
	I0610 10:18:25.007810    4864 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:18:25.007837    4864 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:18:25.007853    4864 cache.go:57] Caching tarball of preloaded images
	I0610 10:18:25.007913    4864 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:18:25.007926    4864 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:18:25.008149    4864 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/custom-flannel-472000/config.json ...
	I0610 10:18:25.008164    4864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/custom-flannel-472000/config.json: {Name:mk1db460efec16a0d383e68fbf96dd3dc8190ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:18:25.008382    4864 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:18:25.008393    4864 start.go:364] acquiring machines lock for custom-flannel-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:25.008424    4864 start.go:368] acquired machines lock for "custom-flannel-472000" in 25.208µs
	I0610 10:18:25.008436    4864 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:25.008461    4864 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:25.015823    4864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:25.032335    4864 start.go:159] libmachine.API.Create for "custom-flannel-472000" (driver="qemu2")
	I0610 10:18:25.032363    4864 client.go:168] LocalClient.Create starting
	I0610 10:18:25.032423    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:25.032443    4864 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:25.032454    4864 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:25.032501    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:25.032519    4864 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:25.032530    4864 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:25.032871    4864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:25.140571    4864 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:25.362462    4864 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:25.362469    4864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:25.362658    4864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2
	I0610 10:18:25.372023    4864 main.go:141] libmachine: STDOUT: 
	I0610 10:18:25.372049    4864 main.go:141] libmachine: STDERR: 
	I0610 10:18:25.372121    4864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2 +20000M
	I0610 10:18:25.379256    4864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:25.379269    4864 main.go:141] libmachine: STDERR: 
	I0610 10:18:25.379293    4864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2
	I0610 10:18:25.379299    4864 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:25.379334    4864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e6:40:f9:55:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2
	I0610 10:18:25.380876    4864 main.go:141] libmachine: STDOUT: 
	I0610 10:18:25.380890    4864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:25.380909    4864 client.go:171] LocalClient.Create took 348.545458ms
	I0610 10:18:27.383096    4864 start.go:128] duration metric: createHost completed in 2.374653s
	I0610 10:18:27.383147    4864 start.go:83] releasing machines lock for "custom-flannel-472000", held for 2.374752208s
	W0610 10:18:27.383210    4864 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:27.391553    4864 out.go:177] * Deleting "custom-flannel-472000" in qemu2 ...
	W0610 10:18:27.415624    4864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:27.415651    4864 start.go:702] Will try again in 5 seconds ...
	I0610 10:18:32.417837    4864 start.go:364] acquiring machines lock for custom-flannel-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:32.418388    4864 start.go:368] acquired machines lock for "custom-flannel-472000" in 433.667µs
	I0610 10:18:32.418543    4864 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:32.418857    4864 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:32.429595    4864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:32.475617    4864 start.go:159] libmachine.API.Create for "custom-flannel-472000" (driver="qemu2")
	I0610 10:18:32.475646    4864 client.go:168] LocalClient.Create starting
	I0610 10:18:32.475761    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:32.475797    4864 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:32.475822    4864 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:32.475904    4864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:32.475931    4864 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:32.475948    4864 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:32.476507    4864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:32.593363    4864 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:32.680021    4864 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:32.680031    4864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:32.680174    4864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2
	I0610 10:18:32.688661    4864 main.go:141] libmachine: STDOUT: 
	I0610 10:18:32.688675    4864 main.go:141] libmachine: STDERR: 
	I0610 10:18:32.688732    4864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2 +20000M
	I0610 10:18:32.695778    4864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:32.695789    4864 main.go:141] libmachine: STDERR: 
	I0610 10:18:32.695805    4864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2
	I0610 10:18:32.695819    4864 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:32.695861    4864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d5:9b:c2:6c:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/custom-flannel-472000/disk.qcow2
	I0610 10:18:32.697650    4864 main.go:141] libmachine: STDOUT: 
	I0610 10:18:32.697678    4864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:32.697760    4864 client.go:171] LocalClient.Create took 222.108958ms
	I0610 10:18:34.700021    4864 start.go:128] duration metric: createHost completed in 2.281144959s
	I0610 10:18:34.700090    4864 start.go:83] releasing machines lock for "custom-flannel-472000", held for 2.281712458s
	W0610 10:18:34.700680    4864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:34.709337    4864 out.go:177] 
	W0610 10:18:34.713295    4864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:18:34.713319    4864 out.go:239] * 
	* 
	W0610 10:18:34.715855    4864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:18:34.725271    4864 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)
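
The custom-flannel profile fails identically: the --cni=testdata/kube-flannel.yaml option never comes into play because host creation aborts at the socket_vmnet step first. For orientation, the "(dbg) Run:" / "Non-zero exit" lines above correspond roughly to the harness running the minikube binary and inspecting its exit code; the sketch below is a simplified stand-in, not the real net_test.go helper, with the binary path and flags copied from the log.

// run_start.go — hypothetical, simplified illustration of the harness step.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"start", "-p", "custom-flannel-472000",
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", "--driver=qemu2")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok {
		// In this report every such start returns exit status 80
		// (GUEST_PROVISION), which net_test.go:113 records as "failed start".
		fmt.Printf("minikube start exited with status %d\n", ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Printf("could not run minikube: %v\n", err)
	}
}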

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p false-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0610 10:18:39.642690    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.785948917s)

-- stdout --
	* [false-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-472000 in cluster false-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 10:18:37.117846    4983 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:18:37.117990    4983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:37.117993    4983 out.go:309] Setting ErrFile to fd 2...
	I0610 10:18:37.117995    4983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:37.118063    4983 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:18:37.119049    4983 out.go:303] Setting JSON to false
	I0610 10:18:37.134068    4983 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4688,"bootTime":1686412829,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:18:37.134130    4983 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:18:37.138779    4983 out.go:177] * [false-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:18:37.141707    4983 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:18:37.141780    4983 notify.go:220] Checking for updates...
	I0610 10:18:37.145691    4983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:18:37.149689    4983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:18:37.152713    4983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:18:37.155655    4983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:18:37.158674    4983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:18:37.161807    4983 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:18:37.165637    4983 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:18:37.172662    4983 start.go:297] selected driver: qemu2
	I0610 10:18:37.172675    4983 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:18:37.172685    4983 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:18:37.174406    4983 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:18:37.177675    4983 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:18:37.180781    4983 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:18:37.180801    4983 cni.go:84] Creating CNI manager for "false"
	I0610 10:18:37.180805    4983 start_flags.go:319] config:
	{Name:false-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:18:37.180894    4983 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:18:37.188637    4983 out.go:177] * Starting control plane node false-472000 in cluster false-472000
	I0610 10:18:37.192470    4983 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:18:37.192496    4983 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:18:37.192508    4983 cache.go:57] Caching tarball of preloaded images
	I0610 10:18:37.192571    4983 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:18:37.192575    4983 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:18:37.193207    4983 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/false-472000/config.json ...
	I0610 10:18:37.193233    4983 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/false-472000/config.json: {Name:mk88bf075c0caacce7ea9cf30573fc4c0ef4d08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:18:37.193448    4983 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:18:37.193466    4983 start.go:364] acquiring machines lock for false-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:37.193604    4983 start.go:368] acquired machines lock for "false-472000" in 125.125µs
	I0610 10:18:37.193620    4983 start.go:93] Provisioning new machine with config: &{Name:false-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:false-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:37.193654    4983 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:37.201518    4983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:37.217996    4983 start.go:159] libmachine.API.Create for "false-472000" (driver="qemu2")
	I0610 10:18:37.218024    4983 client.go:168] LocalClient.Create starting
	I0610 10:18:37.218097    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:37.218116    4983 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:37.218127    4983 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:37.218175    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:37.218190    4983 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:37.218197    4983 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:37.218557    4983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:37.324418    4983 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:37.515033    4983 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:37.515039    4983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:37.515199    4983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2
	I0610 10:18:37.524282    4983 main.go:141] libmachine: STDOUT: 
	I0610 10:18:37.524300    4983 main.go:141] libmachine: STDERR: 
	I0610 10:18:37.524367    4983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2 +20000M
	I0610 10:18:37.531645    4983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:37.531662    4983 main.go:141] libmachine: STDERR: 
	I0610 10:18:37.531677    4983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2
	I0610 10:18:37.531682    4983 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:37.531718    4983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:17:0a:e8:a0:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2
	I0610 10:18:37.533181    4983 main.go:141] libmachine: STDOUT: 
	I0610 10:18:37.533196    4983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:37.533218    4983 client.go:171] LocalClient.Create took 315.193667ms
	I0610 10:18:39.535421    4983 start.go:128] duration metric: createHost completed in 2.341775083s
	I0610 10:18:39.535495    4983 start.go:83] releasing machines lock for "false-472000", held for 2.341914625s
	W0610 10:18:39.535563    4983 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:39.543919    4983 out.go:177] * Deleting "false-472000" in qemu2 ...
	W0610 10:18:39.563820    4983 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:39.563854    4983 start.go:702] Will try again in 5 seconds ...
	I0610 10:18:44.566013    4983 start.go:364] acquiring machines lock for false-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:44.566510    4983 start.go:368] acquired machines lock for "false-472000" in 398.834µs
	I0610 10:18:44.566627    4983 start.go:93] Provisioning new machine with config: &{Name:false-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:false-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:44.566884    4983 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:44.575566    4983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:44.625289    4983 start.go:159] libmachine.API.Create for "false-472000" (driver="qemu2")
	I0610 10:18:44.625329    4983 client.go:168] LocalClient.Create starting
	I0610 10:18:44.625483    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:44.625527    4983 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:44.625548    4983 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:44.625629    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:44.625660    4983 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:44.625673    4983 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:44.626220    4983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:44.747941    4983 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:44.812180    4983 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:44.812186    4983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:44.812342    4983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2
	I0610 10:18:44.820763    4983 main.go:141] libmachine: STDOUT: 
	I0610 10:18:44.820779    4983 main.go:141] libmachine: STDERR: 
	I0610 10:18:44.820834    4983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2 +20000M
	I0610 10:18:44.828030    4983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:44.828041    4983 main.go:141] libmachine: STDERR: 
	I0610 10:18:44.828055    4983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2
	I0610 10:18:44.828061    4983 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:44.828096    4983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:d6:dc:7d:1d:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/false-472000/disk.qcow2
	I0610 10:18:44.829519    4983 main.go:141] libmachine: STDOUT: 
	I0610 10:18:44.829532    4983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:44.829546    4983 client.go:171] LocalClient.Create took 204.214917ms
	I0610 10:18:46.831674    4983 start.go:128] duration metric: createHost completed in 2.264789917s
	I0610 10:18:46.831766    4983 start.go:83] releasing machines lock for "false-472000", held for 2.265269834s
	W0610 10:18:46.832312    4983 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:46.843834    4983 out.go:177] 
	W0610 10:18:46.846989    4983 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:18:46.847041    4983 out.go:239] * 
	* 
	W0610 10:18:46.850906    4983 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:18:46.866911    4983 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
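Note: the failures in this group all show the same symptom: the socket_vmnet daemon is not listening on /var/run/socket_vmnet, so the socket_vmnet_client wrapper cannot hand a network file descriptor to qemu-system-aarch64 and the driver gives up after one retry. A quick way to confirm this on the build host is sketched below; the install prefix matches the paths in these logs, while the gateway address is an example taken from the socket_vmnet documentation, not from this report.

	# Is anything listening on the expected unix socket?
	ls -l /var/run/socket_vmnet
	# If the socket is missing or stale, start the daemon manually (requires root;
	# the gateway address here is illustrative):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet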

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.657057917s)

                                                
                                                
-- stdout --
	* [enable-default-cni-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-472000 in cluster enable-default-cni-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:18:49.058881    5098 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:18:49.059018    5098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:49.059021    5098 out.go:309] Setting ErrFile to fd 2...
	I0610 10:18:49.059023    5098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:18:49.059089    5098 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:18:49.060156    5098 out.go:303] Setting JSON to false
	I0610 10:18:49.075343    5098 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4700,"bootTime":1686412829,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:18:49.075636    5098 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:18:49.080568    5098 out.go:177] * [enable-default-cni-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:18:49.087517    5098 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:18:49.087537    5098 notify.go:220] Checking for updates...
	I0610 10:18:49.094510    5098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:18:49.097508    5098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:18:49.100457    5098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:18:49.103534    5098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:18:49.106558    5098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:18:49.109557    5098 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:18:49.113448    5098 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:18:49.120503    5098 start.go:297] selected driver: qemu2
	I0610 10:18:49.120513    5098 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:18:49.120527    5098 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:18:49.122506    5098 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:18:49.125458    5098 out.go:177] * Automatically selected the socket_vmnet network
	E0610 10:18:49.128597    5098 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0610 10:18:49.128613    5098 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:18:49.128631    5098 cni.go:84] Creating CNI manager for "bridge"
	I0610 10:18:49.128635    5098 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:18:49.128649    5098 start_flags.go:319] config:
	{Name:enable-default-cni-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP:}
	I0610 10:18:49.128739    5098 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:18:49.132529    5098 out.go:177] * Starting control plane node enable-default-cni-472000 in cluster enable-default-cni-472000
	I0610 10:18:49.139536    5098 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:18:49.139556    5098 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:18:49.139567    5098 cache.go:57] Caching tarball of preloaded images
	I0610 10:18:49.139617    5098 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:18:49.139622    5098 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:18:49.139838    5098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/enable-default-cni-472000/config.json ...
	I0610 10:18:49.139850    5098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/enable-default-cni-472000/config.json: {Name:mk075389efb40393a8b2d7f7b9793c343aef1ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:18:49.140056    5098 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:18:49.140068    5098 start.go:364] acquiring machines lock for enable-default-cni-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:49.140097    5098 start.go:368] acquired machines lock for "enable-default-cni-472000" in 24µs
	I0610 10:18:49.140109    5098 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:49.140145    5098 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:49.148485    5098 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:49.165012    5098 start.go:159] libmachine.API.Create for "enable-default-cni-472000" (driver="qemu2")
	I0610 10:18:49.165036    5098 client.go:168] LocalClient.Create starting
	I0610 10:18:49.165089    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:49.165108    5098 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:49.165118    5098 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:49.165170    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:49.165185    5098 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:49.165191    5098 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:49.165518    5098 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:49.283734    5098 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:49.355023    5098 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:49.355028    5098 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:49.355174    5098 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2
	I0610 10:18:49.363642    5098 main.go:141] libmachine: STDOUT: 
	I0610 10:18:49.363658    5098 main.go:141] libmachine: STDERR: 
	I0610 10:18:49.363719    5098 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2 +20000M
	I0610 10:18:49.370781    5098 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:49.370802    5098 main.go:141] libmachine: STDERR: 
	I0610 10:18:49.370818    5098 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2
	I0610 10:18:49.370823    5098 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:49.370856    5098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:5b:d7:c8:33:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2
	I0610 10:18:49.372397    5098 main.go:141] libmachine: STDOUT: 
	I0610 10:18:49.372411    5098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:49.372430    5098 client.go:171] LocalClient.Create took 207.391208ms
	I0610 10:18:51.374562    5098 start.go:128] duration metric: createHost completed in 2.234435042s
	I0610 10:18:51.374625    5098 start.go:83] releasing machines lock for "enable-default-cni-472000", held for 2.234553167s
	W0610 10:18:51.374677    5098 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:51.386814    5098 out.go:177] * Deleting "enable-default-cni-472000" in qemu2 ...
	W0610 10:18:51.406256    5098 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:51.406282    5098 start.go:702] Will try again in 5 seconds ...
	I0610 10:18:56.408468    5098 start.go:364] acquiring machines lock for enable-default-cni-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:18:56.409008    5098 start.go:368] acquired machines lock for "enable-default-cni-472000" in 438.291µs
	I0610 10:18:56.409138    5098 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:18:56.409427    5098 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:18:56.418104    5098 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:18:56.464585    5098 start.go:159] libmachine.API.Create for "enable-default-cni-472000" (driver="qemu2")
	I0610 10:18:56.464635    5098 client.go:168] LocalClient.Create starting
	I0610 10:18:56.464758    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:18:56.464804    5098 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:56.464828    5098 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:56.464912    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:18:56.464939    5098 main.go:141] libmachine: Decoding PEM data...
	I0610 10:18:56.464950    5098 main.go:141] libmachine: Parsing certificate...
	I0610 10:18:56.465504    5098 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:18:56.583933    5098 main.go:141] libmachine: Creating SSH key...
	I0610 10:18:56.631138    5098 main.go:141] libmachine: Creating Disk image...
	I0610 10:18:56.631143    5098 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:18:56.631296    5098 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2
	I0610 10:18:56.639805    5098 main.go:141] libmachine: STDOUT: 
	I0610 10:18:56.639822    5098 main.go:141] libmachine: STDERR: 
	I0610 10:18:56.639891    5098 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2 +20000M
	I0610 10:18:56.646947    5098 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:18:56.646960    5098 main.go:141] libmachine: STDERR: 
	I0610 10:18:56.646973    5098 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2
	I0610 10:18:56.646981    5098 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:18:56.647028    5098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:d5:20:a7:1c:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/enable-default-cni-472000/disk.qcow2
	I0610 10:18:56.648597    5098 main.go:141] libmachine: STDOUT: 
	I0610 10:18:56.648611    5098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:18:56.648621    5098 client.go:171] LocalClient.Create took 183.98125ms
	I0610 10:18:58.650782    5098 start.go:128] duration metric: createHost completed in 2.241331542s
	I0610 10:18:58.650901    5098 start.go:83] releasing machines lock for "enable-default-cni-472000", held for 2.24189s
	W0610 10:18:58.651405    5098 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:18:58.659860    5098 out.go:177] 
	W0610 10:18:58.663973    5098 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:18:58.664067    5098 out.go:239] * 
	* 
	W0610 10:18:58.666804    5098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:18:58.674963    5098 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.66s)
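Note: aside from the socket_vmnet failure, the stderr above records that --enable-default-cni is deprecated and is rewritten internally to --cni=bridge (see the start_flags.go:453 line). If this case is re-run by hand, an equivalent invocation with the non-deprecated flag would be the following sketch, which keeps the same profile name and options as the test and changes only the CNI flag:

	out/minikube-darwin-arm64 start -p enable-default-cni-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2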

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.715614667s)

                                                
                                                
-- stdout --
	* [flannel-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-472000 in cluster flannel-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:00.874437    5210 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:00.874567    5210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:00.874570    5210 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:00.874573    5210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:00.874643    5210 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:00.875666    5210 out.go:303] Setting JSON to false
	I0610 10:19:00.890774    5210 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4711,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:00.890841    5210 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:00.899086    5210 out.go:177] * [flannel-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:00.903115    5210 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:00.903170    5210 notify.go:220] Checking for updates...
	I0610 10:19:00.910048    5210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:00.913073    5210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:00.916077    5210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:00.919071    5210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:00.922102    5210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:00.925254    5210 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:00.929009    5210 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:00.936107    5210 start.go:297] selected driver: qemu2
	I0610 10:19:00.936113    5210 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:00.936121    5210 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:00.937969    5210 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:00.941083    5210 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:00.944188    5210 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:00.944211    5210 cni.go:84] Creating CNI manager for "flannel"
	I0610 10:19:00.944216    5210 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0610 10:19:00.944222    5210 start_flags.go:319] config:
	{Name:flannel-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:00.944312    5210 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:00.952023    5210 out.go:177] * Starting control plane node flannel-472000 in cluster flannel-472000
	I0610 10:19:00.956128    5210 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:00.956150    5210 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:00.956167    5210 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:00.956249    5210 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:00.956254    5210 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:00.956498    5210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/flannel-472000/config.json ...
	I0610 10:19:00.956511    5210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/flannel-472000/config.json: {Name:mkf3e9fb97119d5562ea568a9240c1a103ebb0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:00.956724    5210 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:00.956741    5210 start.go:364] acquiring machines lock for flannel-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:00.956771    5210 start.go:368] acquired machines lock for "flannel-472000" in 25.666µs
	I0610 10:19:00.956784    5210 start.go:93] Provisioning new machine with config: &{Name:flannel-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:flannel-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:00.956815    5210 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:00.965088    5210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:19:00.981996    5210 start.go:159] libmachine.API.Create for "flannel-472000" (driver="qemu2")
	I0610 10:19:00.982018    5210 client.go:168] LocalClient.Create starting
	I0610 10:19:00.982081    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:00.982104    5210 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:00.982115    5210 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:00.982156    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:00.982172    5210 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:00.982180    5210 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:00.982512    5210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:01.092486    5210 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:01.208820    5210 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:01.208827    5210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:01.208986    5210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2
	I0610 10:19:01.217627    5210 main.go:141] libmachine: STDOUT: 
	I0610 10:19:01.217642    5210 main.go:141] libmachine: STDERR: 
	I0610 10:19:01.217704    5210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2 +20000M
	I0610 10:19:01.224785    5210 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:01.224807    5210 main.go:141] libmachine: STDERR: 
	I0610 10:19:01.224829    5210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2
	I0610 10:19:01.224834    5210 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:01.224872    5210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:da:62:2f:f1:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2
	I0610 10:19:01.226393    5210 main.go:141] libmachine: STDOUT: 
	I0610 10:19:01.226405    5210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:01.226422    5210 client.go:171] LocalClient.Create took 244.402458ms
	I0610 10:19:03.228567    5210 start.go:128] duration metric: createHost completed in 2.271764375s
	I0610 10:19:03.228642    5210 start.go:83] releasing machines lock for "flannel-472000", held for 2.271894958s
	W0610 10:19:03.228743    5210 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:03.246718    5210 out.go:177] * Deleting "flannel-472000" in qemu2 ...
	W0610 10:19:03.262643    5210 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:03.262670    5210 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:08.264853    5210 start.go:364] acquiring machines lock for flannel-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:08.265457    5210 start.go:368] acquired machines lock for "flannel-472000" in 467.625µs
	I0610 10:19:08.265597    5210 start.go:93] Provisioning new machine with config: &{Name:flannel-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:flannel-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:08.265909    5210 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:08.274317    5210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:19:08.316762    5210 start.go:159] libmachine.API.Create for "flannel-472000" (driver="qemu2")
	I0610 10:19:08.316802    5210 client.go:168] LocalClient.Create starting
	I0610 10:19:08.316927    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:08.316963    5210 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:08.316982    5210 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:08.317065    5210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:08.317101    5210 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:08.317118    5210 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:08.317605    5210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:08.436092    5210 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:08.507007    5210 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:08.507012    5210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:08.507177    5210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2
	I0610 10:19:08.516108    5210 main.go:141] libmachine: STDOUT: 
	I0610 10:19:08.516119    5210 main.go:141] libmachine: STDERR: 
	I0610 10:19:08.516170    5210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2 +20000M
	I0610 10:19:08.523251    5210 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:08.523266    5210 main.go:141] libmachine: STDERR: 
	I0610 10:19:08.523277    5210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2
	I0610 10:19:08.523293    5210 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:08.523341    5210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1b:bb:ad:cf:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/flannel-472000/disk.qcow2
	I0610 10:19:08.524856    5210 main.go:141] libmachine: STDOUT: 
	I0610 10:19:08.524871    5210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:08.524882    5210 client.go:171] LocalClient.Create took 208.078542ms
	I0610 10:19:10.527012    5210 start.go:128] duration metric: createHost completed in 2.261115s
	I0610 10:19:10.527074    5210 start.go:83] releasing machines lock for "flannel-472000", held for 2.261627708s
	W0610 10:19:10.527575    5210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:10.534182    5210 out.go:177] 
	W0610 10:19:10.538046    5210 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:10.538083    5210 out.go:239] * 
	* 
	W0610 10:19:10.540652    5210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:10.552133    5210 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.72s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (10.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.703721083s)

                                                
                                                
-- stdout --
	* [bridge-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-472000 in cluster bridge-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:02.253565    5233 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:02.253700    5233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:02.253703    5233 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:02.253706    5233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:02.253789    5233 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:02.254762    5233 out.go:303] Setting JSON to false
	I0610 10:19:02.269701    5233 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4713,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:02.269755    5233 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:02.274856    5233 out.go:177] * [bridge-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:02.280756    5233 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:02.284795    5233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:02.280830    5233 notify.go:220] Checking for updates...
	I0610 10:19:02.290771    5233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:02.293809    5233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:02.296854    5233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:02.298266    5233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:02.302086    5233 config.go:182] Loaded profile config "flannel-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:02.302132    5233 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:02.306854    5233 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:02.317807    5233 start.go:297] selected driver: qemu2
	I0610 10:19:02.317812    5233 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:02.317822    5233 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:02.319673    5233 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:02.322804    5233 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:02.324238    5233 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:02.324262    5233 cni.go:84] Creating CNI manager for "bridge"
	I0610 10:19:02.324266    5233 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:19:02.324282    5233 start_flags.go:319] config:
	{Name:bridge-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:02.324370    5233 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:02.331834    5233 out.go:177] * Starting control plane node bridge-472000 in cluster bridge-472000
	I0610 10:19:02.335801    5233 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:02.335817    5233 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:02.335831    5233 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:02.335920    5233 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:02.335934    5233 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:02.336009    5233 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/bridge-472000/config.json ...
	I0610 10:19:02.336025    5233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/bridge-472000/config.json: {Name:mk988f468790aa889fd0f42add2357bd55048fe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:02.336232    5233 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:02.336243    5233 start.go:364] acquiring machines lock for bridge-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:03.228751    5233 start.go:368] acquired machines lock for "bridge-472000" in 892.4895ms
	I0610 10:19:03.228931    5233 start.go:93] Provisioning new machine with config: &{Name:bridge-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:bridge-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:03.229228    5233 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:03.238731    5233 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:19:03.284382    5233 start.go:159] libmachine.API.Create for "bridge-472000" (driver="qemu2")
	I0610 10:19:03.284431    5233 client.go:168] LocalClient.Create starting
	I0610 10:19:03.284547    5233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:03.284582    5233 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:03.284597    5233 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:03.284677    5233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:03.284705    5233 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:03.284715    5233 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:03.285328    5233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:03.434755    5233 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:03.488409    5233 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:03.488417    5233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:03.488558    5233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2
	I0610 10:19:03.497175    5233 main.go:141] libmachine: STDOUT: 
	I0610 10:19:03.497186    5233 main.go:141] libmachine: STDERR: 
	I0610 10:19:03.497231    5233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2 +20000M
	I0610 10:19:03.504285    5233 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:03.504310    5233 main.go:141] libmachine: STDERR: 
	I0610 10:19:03.504335    5233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2
	I0610 10:19:03.504344    5233 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:03.504392    5233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:c0:89:84:da:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2
	I0610 10:19:03.505914    5233 main.go:141] libmachine: STDOUT: 
	I0610 10:19:03.505925    5233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:03.505946    5233 client.go:171] LocalClient.Create took 221.511958ms
	I0610 10:19:05.508071    5233 start.go:128] duration metric: createHost completed in 2.278854542s
	I0610 10:19:05.508141    5233 start.go:83] releasing machines lock for "bridge-472000", held for 2.279391208s
	W0610 10:19:05.508240    5233 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:05.518381    5233 out.go:177] * Deleting "bridge-472000" in qemu2 ...
	W0610 10:19:05.538547    5233 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:05.538569    5233 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:10.540728    5233 start.go:364] acquiring machines lock for bridge-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:10.541067    5233 start.go:368] acquired machines lock for "bridge-472000" in 275.125µs
	I0610 10:19:10.541206    5233 start.go:93] Provisioning new machine with config: &{Name:bridge-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:bridge-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:10.541533    5233 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:10.552126    5233 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:19:10.600551    5233 start.go:159] libmachine.API.Create for "bridge-472000" (driver="qemu2")
	I0610 10:19:10.600634    5233 client.go:168] LocalClient.Create starting
	I0610 10:19:10.600750    5233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:10.600791    5233 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:10.600816    5233 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:10.600896    5233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:10.600928    5233 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:10.600945    5233 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:10.601477    5233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:10.720296    5233 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:10.870131    5233 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:10.870144    5233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:10.870311    5233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2
	I0610 10:19:10.879423    5233 main.go:141] libmachine: STDOUT: 
	I0610 10:19:10.879444    5233 main.go:141] libmachine: STDERR: 
	I0610 10:19:10.879508    5233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2 +20000M
	I0610 10:19:10.887483    5233 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:10.887499    5233 main.go:141] libmachine: STDERR: 
	I0610 10:19:10.887515    5233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2
	I0610 10:19:10.887524    5233 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:10.887580    5233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a7:cf:51:df:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/bridge-472000/disk.qcow2
	I0610 10:19:10.889261    5233 main.go:141] libmachine: STDOUT: 
	I0610 10:19:10.889275    5233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:10.889289    5233 client.go:171] LocalClient.Create took 288.654166ms
	I0610 10:19:12.891364    5233 start.go:128] duration metric: createHost completed in 2.34983625s
	I0610 10:19:12.891389    5233 start.go:83] releasing machines lock for "bridge-472000", held for 2.350343459s
	W0610 10:19:12.891511    5233 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:12.899740    5233 out.go:177] 
	W0610 10:19:12.903860    5233 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:12.903874    5233 out.go:239] * 
	* 
	W0610 10:19:12.904354    5233 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:12.914896    5233 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.70s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-472000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.73915575s)

                                                
                                                
-- stdout --
	* [kubenet-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-472000 in cluster kubenet-472000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:12.892121    5356 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:12.896344    5356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:12.896348    5356 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:12.896351    5356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:12.896431    5356 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:12.900081    5356 out.go:303] Setting JSON to false
	I0610 10:19:12.915514    5356 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4723,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:12.915569    5356 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:12.926849    5356 out.go:177] * [kubenet-472000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:12.938830    5356 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:12.935015    5356 notify.go:220] Checking for updates...
	I0610 10:19:12.945888    5356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:12.953880    5356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:12.956876    5356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:12.959834    5356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:12.962923    5356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:12.966124    5356 config.go:182] Loaded profile config "bridge-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:12.966172    5356 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:12.970839    5356 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:12.977773    5356 start.go:297] selected driver: qemu2
	I0610 10:19:12.977780    5356 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:12.977790    5356 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:12.979734    5356 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:12.983878    5356 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:12.987794    5356 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:12.987816    5356 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0610 10:19:12.987821    5356 start_flags.go:319] config:
	{Name:kubenet-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:12.987942    5356 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:12.991823    5356 out.go:177] * Starting control plane node kubenet-472000 in cluster kubenet-472000
	I0610 10:19:12.998825    5356 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:12.998864    5356 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:12.998877    5356 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:12.998972    5356 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:12.998984    5356 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:12.999049    5356 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kubenet-472000/config.json ...
	I0610 10:19:12.999062    5356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/kubenet-472000/config.json: {Name:mk35337d2adcaadb3d51e9a7f26d1cb38f7f86c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:12.999257    5356 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:12.999272    5356 start.go:364] acquiring machines lock for kubenet-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:12.999300    5356 start.go:368] acquired machines lock for "kubenet-472000" in 22.958µs
	I0610 10:19:12.999311    5356 start.go:93] Provisioning new machine with config: &{Name:kubenet-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:12.999355    5356 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:13.006821    5356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:19:13.021504    5356 start.go:159] libmachine.API.Create for "kubenet-472000" (driver="qemu2")
	I0610 10:19:13.021538    5356 client.go:168] LocalClient.Create starting
	I0610 10:19:13.021611    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:13.021637    5356 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:13.021651    5356 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:13.021698    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:13.021712    5356 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:13.021719    5356 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:13.022084    5356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:13.135842    5356 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:13.250071    5356 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:13.250078    5356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:13.250230    5356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2
	I0610 10:19:13.263443    5356 main.go:141] libmachine: STDOUT: 
	I0610 10:19:13.263458    5356 main.go:141] libmachine: STDERR: 
	I0610 10:19:13.263511    5356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2 +20000M
	I0610 10:19:13.271517    5356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:13.271547    5356 main.go:141] libmachine: STDERR: 
	I0610 10:19:13.271572    5356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2
	I0610 10:19:13.271579    5356 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:13.271611    5356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:5b:51:53:57:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2
	I0610 10:19:13.273549    5356 main.go:141] libmachine: STDOUT: 
	I0610 10:19:13.273569    5356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:13.273585    5356 client.go:171] LocalClient.Create took 252.045333ms
	I0610 10:19:15.275825    5356 start.go:128] duration metric: createHost completed in 2.276447917s
	I0610 10:19:15.275883    5356 start.go:83] releasing machines lock for "kubenet-472000", held for 2.276610125s
	W0610 10:19:15.275930    5356 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:15.292848    5356 out.go:177] * Deleting "kubenet-472000" in qemu2 ...
	W0610 10:19:15.307884    5356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:15.307911    5356 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:20.310122    5356 start.go:364] acquiring machines lock for kubenet-472000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:20.310603    5356 start.go:368] acquired machines lock for "kubenet-472000" in 391.792µs
	I0610 10:19:20.310757    5356 start.go:93] Provisioning new machine with config: &{Name:kubenet-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-472000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:20.311042    5356 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:20.319812    5356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 10:19:20.368064    5356 start.go:159] libmachine.API.Create for "kubenet-472000" (driver="qemu2")
	I0610 10:19:20.368121    5356 client.go:168] LocalClient.Create starting
	I0610 10:19:20.368223    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:20.368266    5356 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:20.368283    5356 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:20.368356    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:20.368383    5356 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:20.368396    5356 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:20.369011    5356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:20.489971    5356 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:20.546149    5356 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:20.546154    5356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:20.546329    5356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2
	I0610 10:19:20.554810    5356 main.go:141] libmachine: STDOUT: 
	I0610 10:19:20.554822    5356 main.go:141] libmachine: STDERR: 
	I0610 10:19:20.554884    5356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2 +20000M
	I0610 10:19:20.561993    5356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:20.562009    5356 main.go:141] libmachine: STDERR: 
	I0610 10:19:20.562031    5356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2
	I0610 10:19:20.562037    5356 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:20.562077    5356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9c:05:8c:8d:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/kubenet-472000/disk.qcow2
	I0610 10:19:20.563629    5356 main.go:141] libmachine: STDOUT: 
	I0610 10:19:20.563657    5356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:20.563672    5356 client.go:171] LocalClient.Create took 195.549458ms
	I0610 10:19:22.565817    5356 start.go:128] duration metric: createHost completed in 2.254774708s
	I0610 10:19:22.565873    5356 start.go:83] releasing machines lock for "kubenet-472000", held for 2.2552835s
	W0610 10:19:22.566253    5356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:22.575866    5356 out.go:177] 
	W0610 10:19:22.579950    5356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:22.579978    5356 out.go:239] * 
	* 
	W0610 10:19:22.582664    5356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:22.590935    5356 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.74s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-737000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-737000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.9342945s)

                                                
                                                
-- stdout --
	* [old-k8s-version-737000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-737000 in cluster old-k8s-version-737000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-737000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:15.027017    5461 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:15.027149    5461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:15.027152    5461 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:15.027154    5461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:15.027221    5461 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:15.028247    5461 out.go:303] Setting JSON to false
	I0610 10:19:15.043352    5461 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4726,"bootTime":1686412829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:15.043427    5461 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:15.048039    5461 out.go:177] * [old-k8s-version-737000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:15.056072    5461 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:15.060002    5461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:15.056158    5461 notify.go:220] Checking for updates...
	I0610 10:19:15.066007    5461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:15.069035    5461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:15.072020    5461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:15.075086    5461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:15.078357    5461 config.go:182] Loaded profile config "kubenet-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:15.078409    5461 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:15.083006    5461 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:15.089947    5461 start.go:297] selected driver: qemu2
	I0610 10:19:15.089952    5461 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:15.089960    5461 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:15.091825    5461 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:15.094950    5461 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:15.098115    5461 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:15.098135    5461 cni.go:84] Creating CNI manager for ""
	I0610 10:19:15.098142    5461 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 10:19:15.098148    5461 start_flags.go:319] config:
	{Name:old-k8s-version-737000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:15.098228    5461 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:15.104964    5461 out.go:177] * Starting control plane node old-k8s-version-737000 in cluster old-k8s-version-737000
	I0610 10:19:15.108879    5461 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 10:19:15.108908    5461 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 10:19:15.108926    5461 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:15.108996    5461 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:15.109002    5461 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 10:19:15.109069    5461 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/old-k8s-version-737000/config.json ...
	I0610 10:19:15.109083    5461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/old-k8s-version-737000/config.json: {Name:mk8dc7cab8384b01f2fa5372fb43a84434b46dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:15.109269    5461 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:15.109283    5461 start.go:364] acquiring machines lock for old-k8s-version-737000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:15.276008    5461 start.go:368] acquired machines lock for "old-k8s-version-737000" in 166.687833ms
	I0610 10:19:15.276095    5461 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:15.276302    5461 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:15.285797    5461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:15.330828    5461 start.go:159] libmachine.API.Create for "old-k8s-version-737000" (driver="qemu2")
	I0610 10:19:15.330875    5461 client.go:168] LocalClient.Create starting
	I0610 10:19:15.330991    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:15.331031    5461 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:15.331055    5461 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:15.331140    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:15.331168    5461 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:15.331186    5461 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:15.331841    5461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:15.450956    5461 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:15.618703    5461 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:15.618710    5461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:15.618867    5461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:15.628042    5461 main.go:141] libmachine: STDOUT: 
	I0610 10:19:15.628059    5461 main.go:141] libmachine: STDERR: 
	I0610 10:19:15.628109    5461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2 +20000M
	I0610 10:19:15.635202    5461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:15.635214    5461 main.go:141] libmachine: STDERR: 
	I0610 10:19:15.635232    5461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:15.635240    5461 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:15.635276    5461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:1d:10:7a:9e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:15.636722    5461 main.go:141] libmachine: STDOUT: 
	I0610 10:19:15.636734    5461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:15.636752    5461 client.go:171] LocalClient.Create took 305.875708ms
	I0610 10:19:17.638921    5461 start.go:128] duration metric: createHost completed in 2.362628375s
	I0610 10:19:17.639022    5461 start.go:83] releasing machines lock for "old-k8s-version-737000", held for 2.363014292s
	W0610 10:19:17.639112    5461 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:17.649419    5461 out.go:177] * Deleting "old-k8s-version-737000" in qemu2 ...
	W0610 10:19:17.668847    5461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:17.668875    5461 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:22.670890    5461 start.go:364] acquiring machines lock for old-k8s-version-737000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:22.670973    5461 start.go:368] acquired machines lock for "old-k8s-version-737000" in 65.5µs
	I0610 10:19:22.671014    5461 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:22.671071    5461 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:22.679256    5461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:22.694882    5461 start.go:159] libmachine.API.Create for "old-k8s-version-737000" (driver="qemu2")
	I0610 10:19:22.694902    5461 client.go:168] LocalClient.Create starting
	I0610 10:19:22.694960    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:22.694987    5461 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:22.694997    5461 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:22.695032    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:22.695046    5461 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:22.695054    5461 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:22.695326    5461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:22.807215    5461 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:22.883384    5461 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:22.883394    5461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:22.883565    5461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:22.892961    5461 main.go:141] libmachine: STDOUT: 
	I0610 10:19:22.892982    5461 main.go:141] libmachine: STDERR: 
	I0610 10:19:22.893054    5461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2 +20000M
	I0610 10:19:22.901216    5461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:22.901238    5461 main.go:141] libmachine: STDERR: 
	I0610 10:19:22.901252    5461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:22.901256    5461 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:22.901307    5461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b1:25:9b:61:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:22.903033    5461 main.go:141] libmachine: STDOUT: 
	I0610 10:19:22.903046    5461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:22.903059    5461 client.go:171] LocalClient.Create took 208.157083ms
	I0610 10:19:24.903707    5461 start.go:128] duration metric: createHost completed in 2.232662875s
	I0610 10:19:24.903728    5461 start.go:83] releasing machines lock for "old-k8s-version-737000", held for 2.232783917s
	W0610 10:19:24.903825    5461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-737000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-737000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:24.914104    5461 out.go:177] 
	W0610 10:19:24.917740    5461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:24.917751    5461 out.go:239] * 
	* 
	W0610 10:19:24.918264    5461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:24.928748    5461 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-737000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (35.531333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.97s)
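
Both start attempts above abort at the same point: socket_vmnet_client cannot reach the "/var/run/socket_vmnet" unix socket ("Connection refused"), so the qemu2 VM is never launched and the profile ends up "Stopped". A minimal sketch, assuming Go is available on the host, that reproduces just that connectivity check; the socket path is taken from the failing command line in the log, everything else here is illustrative and not part of the test suite:

	// socketcheck.go - dial the socket_vmnet unix socket that the qemu
	// launch above depends on; a "connection refused" error here matches
	// the failure reported by socket_vmnet_client in the log.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet") // path from the log
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}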

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-133000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-133000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.817762667s)

                                                
                                                
-- stdout --
	* [no-preload-133000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-133000 in cluster no-preload-133000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-133000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:24.749268    5576 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:24.749409    5576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:24.749412    5576 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:24.749414    5576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:24.749477    5576 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:24.750474    5576 out.go:303] Setting JSON to false
	I0610 10:19:24.765781    5576 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4735,"bootTime":1686412829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:24.765848    5576 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:24.771431    5576 out.go:177] * [no-preload-133000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:24.778365    5576 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:24.782379    5576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:24.778419    5576 notify.go:220] Checking for updates...
	I0610 10:19:24.788311    5576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:24.791363    5576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:24.794363    5576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:24.797335    5576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:24.800739    5576 config.go:182] Loaded profile config "old-k8s-version-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 10:19:24.800780    5576 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:24.805342    5576 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:24.812350    5576 start.go:297] selected driver: qemu2
	I0610 10:19:24.812355    5576 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:24.812373    5576 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:24.814247    5576 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:24.817367    5576 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:24.818851    5576 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:24.818868    5576 cni.go:84] Creating CNI manager for ""
	I0610 10:19:24.818875    5576 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:24.818880    5576 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:19:24.818887    5576 start_flags.go:319] config:
	{Name:no-preload-133000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-133000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:24.818997    5576 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.822391    5576 out.go:177] * Starting control plane node no-preload-133000 in cluster no-preload-133000
	I0610 10:19:24.830285    5576 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:24.830375    5576 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/no-preload-133000/config.json ...
	I0610 10:19:24.830396    5576 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/no-preload-133000/config.json: {Name:mk47a71257d7b4e107db1acd477e61d5c4ab921f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:24.830405    5576 cache.go:107] acquiring lock: {Name:mk5e9db964749ce1875223013d924a379c2d67b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830420    5576 cache.go:107] acquiring lock: {Name:mkc01997cf65ddb51907f78f87b88b37c8cdfd8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830438    5576 cache.go:107] acquiring lock: {Name:mkab865ac59e85c9f84e7b2c554599e758e63d91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830483    5576 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 10:19:24.830488    5576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.625µs
	I0610 10:19:24.830495    5576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 10:19:24.830506    5576 cache.go:107] acquiring lock: {Name:mk9ff7ce12bb1c825de788d9cced35c79cbb54cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830566    5576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.2
	I0610 10:19:24.830596    5576 cache.go:107] acquiring lock: {Name:mkdda32bfd267c7b5db814dee3143ae55170e33d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830556    5576 cache.go:107] acquiring lock: {Name:mk0cbb26c7d36b795ee41613762eb20bfc07e3db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830626    5576 cache.go:107] acquiring lock: {Name:mk6d706144ab230be4eae7ef0166d1a2f40bcfe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830714    5576 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:24.830709    5576 cache.go:107] acquiring lock: {Name:mk2e3916ec26ba7dcc9bbe17432dc776f0e32133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:24.830728    5576 start.go:364] acquiring machines lock for no-preload-133000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:24.831001    5576 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0610 10:19:24.831136    5576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 10:19:24.831159    5576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0610 10:19:24.831209    5576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.2
	I0610 10:19:24.831208    5576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.2
	I0610 10:19:24.831279    5576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0610 10:19:24.838580    5576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.2
	I0610 10:19:24.838607    5576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 10:19:24.838658    5576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.2
	I0610 10:19:24.838695    5576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0610 10:19:24.839715    5576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.2
	I0610 10:19:24.840751    5576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0610 10:19:24.840835    5576 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0610 10:19:24.903768    5576 start.go:368] acquired machines lock for "no-preload-133000" in 73.033792ms
	I0610 10:19:24.903788    5576 start.go:93] Provisioning new machine with config: &{Name:no-preload-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-133000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:24.903839    5576 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:24.912724    5576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:24.927028    5576 start.go:159] libmachine.API.Create for "no-preload-133000" (driver="qemu2")
	I0610 10:19:24.927064    5576 client.go:168] LocalClient.Create starting
	I0610 10:19:24.927132    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:24.927158    5576 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:24.927169    5576 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:24.927217    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:24.927240    5576 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:24.927248    5576 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:24.930518    5576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:25.053440    5576 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:25.108187    5576 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:25.108203    5576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:25.108340    5576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:25.118272    5576 main.go:141] libmachine: STDOUT: 
	I0610 10:19:25.118296    5576 main.go:141] libmachine: STDERR: 
	I0610 10:19:25.118367    5576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2 +20000M
	I0610 10:19:25.126454    5576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:25.126472    5576 main.go:141] libmachine: STDERR: 
	I0610 10:19:25.126491    5576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:25.126496    5576 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:25.126536    5576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:47:64:5e:cc:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:25.128381    5576 main.go:141] libmachine: STDOUT: 
	I0610 10:19:25.128396    5576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:25.128422    5576 client.go:171] LocalClient.Create took 201.355375ms
	I0610 10:19:26.039291    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2
	I0610 10:19:26.061264    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2
	I0610 10:19:26.119231    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2
	I0610 10:19:26.216880    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0610 10:19:26.360018    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2
	I0610 10:19:26.608698    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0610 10:19:26.705583    5576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0610 10:19:26.880760    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 10:19:26.880821    5576 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.050342041s
	I0610 10:19:26.880853    5576 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 10:19:27.128663    5576 start.go:128] duration metric: createHost completed in 2.224823208s
	I0610 10:19:27.128712    5576 start.go:83] releasing machines lock for "no-preload-133000", held for 2.224963334s
	W0610 10:19:27.128771    5576 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:27.145409    5576 out.go:177] * Deleting "no-preload-133000" in qemu2 ...
	W0610 10:19:27.166417    5576 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:27.166455    5576 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:28.320144    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0610 10:19:28.320199    5576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 3.489653416s
	I0610 10:19:28.320226    5576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0610 10:19:28.340942    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0610 10:19:28.340977    5576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.510388416s
	I0610 10:19:28.341012    5576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0610 10:19:29.756834    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0610 10:19:29.756907    5576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 4.926555708s
	I0610 10:19:29.756936    5576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0610 10:19:30.293891    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0610 10:19:30.293939    5576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 5.463496708s
	I0610 10:19:30.293975    5576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0610 10:19:31.273573    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0610 10:19:31.273615    5576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 6.44330225s
	I0610 10:19:31.273641    5576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0610 10:19:32.166630    5576 start.go:364] acquiring machines lock for no-preload-133000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:32.183429    5576 start.go:368] acquired machines lock for "no-preload-133000" in 16.746584ms
	I0610 10:19:32.183484    5576 start.go:93] Provisioning new machine with config: &{Name:no-preload-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-133000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:32.183733    5576 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:32.196005    5576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:32.241316    5576 start.go:159] libmachine.API.Create for "no-preload-133000" (driver="qemu2")
	I0610 10:19:32.241355    5576 client.go:168] LocalClient.Create starting
	I0610 10:19:32.241474    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:32.241510    5576 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:32.241528    5576 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:32.241594    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:32.241622    5576 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:32.241647    5576 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:32.242119    5576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:32.363843    5576 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:32.481404    5576 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:32.481411    5576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:32.485081    5576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:32.494172    5576 main.go:141] libmachine: STDOUT: 
	I0610 10:19:32.494191    5576 main.go:141] libmachine: STDERR: 
	I0610 10:19:32.494272    5576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2 +20000M
	I0610 10:19:32.502324    5576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:32.502342    5576 main.go:141] libmachine: STDERR: 
	I0610 10:19:32.502360    5576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:32.502366    5576 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:32.502417    5576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:21:03:35:e7:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:32.504364    5576 main.go:141] libmachine: STDOUT: 
	I0610 10:19:32.504380    5576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:32.504392    5576 client.go:171] LocalClient.Create took 263.036292ms
	I0610 10:19:33.376174    5576 cache.go:157] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0610 10:19:33.376254    5576 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 8.545772708s
	I0610 10:19:33.376300    5576 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0610 10:19:33.376356    5576 cache.go:87] Successfully saved all images to host disk.
	I0610 10:19:34.506521    5576 start.go:128] duration metric: createHost completed in 2.322792209s
	I0610 10:19:34.506604    5576 start.go:83] releasing machines lock for "no-preload-133000", held for 2.32318475s
	W0610 10:19:34.506952    5576 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-133000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-133000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:34.518394    5576 out.go:177] 
	W0610 10:19:34.521503    5576 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:34.521526    5576 out.go:239] * 
	* 
	W0610 10:19:34.523509    5576 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:34.531472    5576 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-133000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (50.168458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-737000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-737000 create -f testdata/busybox.yaml: exit status 1 (31.721375ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-737000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-737000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (34.073542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (31.99ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
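The kubectl error here is a knock-on effect of the failed start above: because the cluster was never provisioned, minikube never wrote an "old-k8s-version-737000" context into the kubeconfig, so every subsequent kubectl --context call in this group fails with "context does not exist". A quick way to confirm the context is simply missing (a sketch; the kubeconfig path is the KUBECONFIG value reported in the start output):

	# List the contexts actually present in the test kubeconfig
	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/16578-1150/kubeconfig
	# Or check whether the profile name appears in it at all
	grep -c old-k8s-version-737000 /Users/jenkins/minikube-integration/16578-1150/kubeconfig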

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-737000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-737000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-737000 describe deploy/metrics-server -n kube-system: exit status 1 (27.511208ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-737000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-737000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (29.197875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (6.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-737000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-737000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.883328333s)

                                                
                                                
-- stdout --
	* [old-k8s-version-737000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-737000 in cluster old-k8s-version-737000
	* Restarting existing qemu2 VM for "old-k8s-version-737000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-737000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:25.365055    5640 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:25.365193    5640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:25.365196    5640 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:25.365198    5640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:25.365285    5640 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:25.366704    5640 out.go:303] Setting JSON to false
	I0610 10:19:25.383247    5640 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4736,"bootTime":1686412829,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:25.383334    5640 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:25.387360    5640 out.go:177] * [old-k8s-version-737000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:25.394247    5640 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:25.397281    5640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:25.394260    5640 notify.go:220] Checking for updates...
	I0610 10:19:25.403266    5640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:25.406341    5640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:25.409263    5640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:25.412302    5640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:25.415584    5640 config.go:182] Loaded profile config "old-k8s-version-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 10:19:25.416902    5640 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0610 10:19:25.420230    5640 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:25.424286    5640 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:19:25.429282    5640 start.go:297] selected driver: qemu2
	I0610 10:19:25.429287    5640 start.go:875] validating driver "qemu2" against &{Name:old-k8s-version-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-737000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:25.429338    5640 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:25.431186    5640 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:25.431209    5640 cni.go:84] Creating CNI manager for ""
	I0610 10:19:25.431215    5640 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 10:19:25.431219    5640 start_flags.go:319] config:
	{Name:old-k8s-version-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-737000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:25.431289    5640 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:25.438246    5640 out.go:177] * Starting control plane node old-k8s-version-737000 in cluster old-k8s-version-737000
	I0610 10:19:25.442226    5640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 10:19:25.442247    5640 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 10:19:25.442257    5640 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:25.442310    5640 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:25.442316    5640 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 10:19:25.442369    5640 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/old-k8s-version-737000/config.json ...
	I0610 10:19:25.442675    5640 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:25.442685    5640 start.go:364] acquiring machines lock for old-k8s-version-737000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:27.128912    5640 start.go:368] acquired machines lock for "old-k8s-version-737000" in 1.686225s
	I0610 10:19:27.129005    5640 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:27.129036    5640 fix.go:55] fixHost starting: 
	I0610 10:19:27.129753    5640 fix.go:103] recreateIfNeeded on old-k8s-version-737000: state=Stopped err=<nil>
	W0610 10:19:27.129791    5640 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:27.142078    5640 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-737000" ...
	I0610 10:19:27.148620    5640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b1:25:9b:61:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:27.158412    5640 main.go:141] libmachine: STDOUT: 
	I0610 10:19:27.158471    5640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:27.158596    5640 fix.go:57] fixHost completed within 29.563042ms
	I0610 10:19:27.158616    5640 start.go:83] releasing machines lock for "old-k8s-version-737000", held for 29.673ms
	W0610 10:19:27.158656    5640 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:27.158840    5640 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:27.158855    5640 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:32.160969    5640 start.go:364] acquiring machines lock for old-k8s-version-737000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:32.161407    5640 start.go:368] acquired machines lock for "old-k8s-version-737000" in 334.875µs
	I0610 10:19:32.161563    5640 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:32.161582    5640 fix.go:55] fixHost starting: 
	I0610 10:19:32.162344    5640 fix.go:103] recreateIfNeeded on old-k8s-version-737000: state=Stopped err=<nil>
	W0610 10:19:32.162370    5640 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:32.168078    5640 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-737000" ...
	I0610 10:19:32.174151    5640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b1:25:9b:61:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/old-k8s-version-737000/disk.qcow2
	I0610 10:19:32.183147    5640 main.go:141] libmachine: STDOUT: 
	I0610 10:19:32.183204    5640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:32.183301    5640 fix.go:57] fixHost completed within 21.719083ms
	I0610 10:19:32.183324    5640 start.go:83] releasing machines lock for "old-k8s-version-737000", held for 21.894ms
	W0610 10:19:32.183587    5640 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-737000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-737000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:32.198994    5640 out.go:177] 
	W0610 10:19:32.203109    5640 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:32.203157    5640 out.go:239] * 
	* 
	W0610 10:19:32.205464    5640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:32.212968    5640 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-737000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (51.51875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-737000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (35.473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-737000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-737000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-737000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.545334ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-737000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-737000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (30.886459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-737000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-737000 "sudo crictl images -o json": exit status 89 (38.94325ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-737000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-737000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-737000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (28.787458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
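The JSON decode failure is expected given the exit status 89 output: instead of the crictl image list, the ssh command returned minikube's "control plane node must be running" notice, which the test then tried to parse as JSON (hence "invalid character '*'"). For reference, against a running node the same check would look roughly like this (a sketch; it assumes jq is available on the host and that crictl's JSON output exposes an .images[].repoTags field, which may vary by crictl version):

	# What the image verification does when the node is up (illustrative only)
	out/minikube-darwin-arm64 ssh -p old-k8s-version-737000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'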

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-737000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-737000 --alsologtostderr -v=1: exit status 89 (44.859792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-737000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:32.468258    5727 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:32.468831    5727 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:32.468836    5727 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:32.468838    5727 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:32.468942    5727 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:32.469252    5727 out.go:303] Setting JSON to false
	I0610 10:19:32.469268    5727 mustload.go:65] Loading cluster: old-k8s-version-737000
	I0610 10:19:32.469617    5727 config.go:182] Loaded profile config "old-k8s-version-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 10:19:32.472988    5727 out.go:177] * The control plane node must be running for this command
	I0610 10:19:32.481001    5727 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-737000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-737000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (27.715417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (28.292958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-737000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-315000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-315000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.348493167s)

                                                
                                                
-- stdout --
	* [embed-certs-315000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-315000 in cluster embed-certs-315000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:32.934714    5753 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:32.934819    5753 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:32.934821    5753 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:32.934824    5753 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:32.934889    5753 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:32.935906    5753 out.go:303] Setting JSON to false
	I0610 10:19:32.951237    5753 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4743,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:32.951298    5753 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:32.960499    5753 out.go:177] * [embed-certs-315000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:32.968389    5753 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:32.964539    5753 notify.go:220] Checking for updates...
	I0610 10:19:32.975504    5753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:32.978458    5753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:32.981536    5753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:32.988404    5753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:32.992588    5753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:32.995795    5753 config.go:182] Loaded profile config "no-preload-133000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:32.995829    5753 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:32.998479    5753 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:33.005577    5753 start.go:297] selected driver: qemu2
	I0610 10:19:33.005582    5753 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:33.005595    5753 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:33.007409    5753 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:33.011484    5753 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:33.015563    5753 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:33.015579    5753 cni.go:84] Creating CNI manager for ""
	I0610 10:19:33.015585    5753 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:33.015590    5753 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:19:33.015596    5753 start_flags.go:319] config:
	{Name:embed-certs-315000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:33.015665    5753 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:33.023572    5753 out.go:177] * Starting control plane node embed-certs-315000 in cluster embed-certs-315000
	I0610 10:19:33.027541    5753 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:33.027568    5753 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:33.027579    5753 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:33.027634    5753 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:33.027639    5753 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:33.027702    5753 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/embed-certs-315000/config.json ...
	I0610 10:19:33.027716    5753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/embed-certs-315000/config.json: {Name:mk932386c88053e71a8a240f8da05b1146e73ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:33.027904    5753 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:33.027915    5753 start.go:364] acquiring machines lock for embed-certs-315000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:34.506748    5753 start.go:368] acquired machines lock for "embed-certs-315000" in 1.478827583s
	I0610 10:19:34.506946    5753 start.go:93] Provisioning new machine with config: &{Name:embed-certs-315000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:34.507175    5753 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:34.515504    5753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:34.561525    5753 start.go:159] libmachine.API.Create for "embed-certs-315000" (driver="qemu2")
	I0610 10:19:34.561576    5753 client.go:168] LocalClient.Create starting
	I0610 10:19:34.561672    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:34.561715    5753 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:34.561735    5753 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:34.561812    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:34.561839    5753 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:34.561849    5753 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:34.562436    5753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:34.686484    5753 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:34.858313    5753 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:34.858324    5753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:34.858460    5753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:34.869742    5753 main.go:141] libmachine: STDOUT: 
	I0610 10:19:34.869761    5753 main.go:141] libmachine: STDERR: 
	I0610 10:19:34.869838    5753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2 +20000M
	I0610 10:19:34.877530    5753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:34.877549    5753 main.go:141] libmachine: STDERR: 
	I0610 10:19:34.877569    5753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:34.877583    5753 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:34.877632    5753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:86:46:6c:df:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:34.879436    5753 main.go:141] libmachine: STDOUT: 
	I0610 10:19:34.879453    5753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:34.879470    5753 client.go:171] LocalClient.Create took 317.893208ms
	I0610 10:19:36.881620    5753 start.go:128] duration metric: createHost completed in 2.374452708s
	I0610 10:19:36.881688    5753 start.go:83] releasing machines lock for "embed-certs-315000", held for 2.374914625s
	W0610 10:19:36.881754    5753 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:36.900295    5753 out.go:177] * Deleting "embed-certs-315000" in qemu2 ...
	W0610 10:19:36.921570    5753 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:36.921608    5753 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:41.923695    5753 start.go:364] acquiring machines lock for embed-certs-315000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:41.935111    5753 start.go:368] acquired machines lock for "embed-certs-315000" in 11.36175ms
	I0610 10:19:41.935153    5753 start.go:93] Provisioning new machine with config: &{Name:embed-certs-315000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:41.935380    5753 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:41.947881    5753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:41.993057    5753 start.go:159] libmachine.API.Create for "embed-certs-315000" (driver="qemu2")
	I0610 10:19:41.993103    5753 client.go:168] LocalClient.Create starting
	I0610 10:19:41.993227    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:41.993273    5753 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:41.993288    5753 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:41.993360    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:41.993387    5753 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:41.993402    5753 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:41.993874    5753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:42.114942    5753 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:42.197134    5753 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:42.197148    5753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:42.197315    5753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:42.206415    5753 main.go:141] libmachine: STDOUT: 
	I0610 10:19:42.206437    5753 main.go:141] libmachine: STDERR: 
	I0610 10:19:42.206509    5753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2 +20000M
	I0610 10:19:42.214786    5753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:42.214803    5753 main.go:141] libmachine: STDERR: 
	I0610 10:19:42.214820    5753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:42.214829    5753 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:42.214900    5753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bc:2d:00:78:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:42.216479    5753 main.go:141] libmachine: STDOUT: 
	I0610 10:19:42.216494    5753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:42.216507    5753 client.go:171] LocalClient.Create took 223.403292ms
	I0610 10:19:44.218840    5753 start.go:128] duration metric: createHost completed in 2.283418959s
	I0610 10:19:44.218928    5753 start.go:83] releasing machines lock for "embed-certs-315000", held for 2.283829458s
	W0610 10:19:44.219352    5753 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:44.231690    5753 out.go:177] 
	W0610 10:19:44.235989    5753 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:44.236017    5753 out.go:239] * 
	* 
	W0610 10:19:44.238414    5753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:44.245881    5753 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-315000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (50.27725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.40s)
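Note on the failure mode: every qemu2 start in this group dies the same way. The driver launches the VM through socket_vmnet_client, which needs the socket_vmnet daemon to be listening on /var/run/socket_vmnet, and each attempt ends with "Connection refused", i.e. nothing is serving that socket on the agent. A minimal check on the build host might look like the following (this assumes socket_vmnet was installed via Homebrew; the service name is an assumption, not taken from this log):

	# Is anything serving the socket the driver dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# On a Homebrew-based install, restarting the daemon usually recreates the socket
	# (service name assumed; adjust to the local launchd/daemon setup).
	sudo brew services restart socket_vmnet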

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-133000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-133000 create -f testdata/busybox.yaml: exit status 1 (32.857333ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-133000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-133000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (33.450667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (31.79425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
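The DeployApp failure (and the remaining no-preload subtests below) is a cascade from the failed FirstStart: the VM never came up, so minikube never wrote a "no-preload-133000" context into the kubeconfig, and every kubectl --context no-preload-133000 call fails with "context does not exist". A quick way to confirm this on the agent is to list the contexts in the kubeconfig used by the run, for example:

	KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig kubectl config get-contexts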

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-133000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-133000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-133000 describe deploy/metrics-server -n kube-system: exit status 1 (27.822291ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-133000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-133000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (28.90175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (7.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-133000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-133000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (7.015117833s)

                                                
                                                
-- stdout --
	* [no-preload-133000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-133000 in cluster no-preload-133000
	* Restarting existing qemu2 VM for "no-preload-133000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-133000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:34.986392    5780 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:34.986510    5780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:34.986513    5780 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:34.986516    5780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:34.986587    5780 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:34.987493    5780 out.go:303] Setting JSON to false
	I0610 10:19:35.002532    5780 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4745,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:35.002600    5780 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:35.007136    5780 out.go:177] * [no-preload-133000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:35.013955    5780 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:35.014003    5780 notify.go:220] Checking for updates...
	I0610 10:19:35.018075    5780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:35.021149    5780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:35.022601    5780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:35.026084    5780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:35.029100    5780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:35.032432    5780 config.go:182] Loaded profile config "no-preload-133000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:35.032674    5780 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:35.037041    5780 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:19:35.044110    5780 start.go:297] selected driver: qemu2
	I0610 10:19:35.044114    5780 start.go:875] validating driver "qemu2" against &{Name:no-preload-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-133000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:35.044196    5780 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:35.046004    5780 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:35.046027    5780 cni.go:84] Creating CNI manager for ""
	I0610 10:19:35.046033    5780 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:35.046038    5780 start_flags.go:319] config:
	{Name:no-preload-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-133000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:35.046107    5780 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.053115    5780 out.go:177] * Starting control plane node no-preload-133000 in cluster no-preload-133000
	I0610 10:19:35.057021    5780 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:35.057087    5780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/no-preload-133000/config.json ...
	I0610 10:19:35.057097    5780 cache.go:107] acquiring lock: {Name:mkc01997cf65ddb51907f78f87b88b37c8cdfd8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057106    5780 cache.go:107] acquiring lock: {Name:mk5e9db964749ce1875223013d924a379c2d67b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057118    5780 cache.go:107] acquiring lock: {Name:mk0cbb26c7d36b795ee41613762eb20bfc07e3db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057166    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0610 10:19:35.057170    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 10:19:35.057172    5780 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 88.375µs
	I0610 10:19:35.057179    5780 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 73.666µs
	I0610 10:19:35.057183    5780 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 10:19:35.057183    5780 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0610 10:19:35.057192    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0610 10:19:35.057194    5780 cache.go:107] acquiring lock: {Name:mk9ff7ce12bb1c825de788d9cced35c79cbb54cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057203    5780 cache.go:107] acquiring lock: {Name:mk6d706144ab230be4eae7ef0166d1a2f40bcfe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057213    5780 cache.go:107] acquiring lock: {Name:mkdda32bfd267c7b5db814dee3143ae55170e33d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057206    5780 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 104.208µs
	I0610 10:19:35.057252    5780 cache.go:107] acquiring lock: {Name:mkab865ac59e85c9f84e7b2c554599e758e63d91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057262    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0610 10:19:35.057267    5780 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 64.5µs
	I0610 10:19:35.057271    5780 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0610 10:19:35.057273    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0610 10:19:35.057277    5780 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 65µs
	I0610 10:19:35.057281    5780 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0610 10:19:35.057245    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 10:19:35.057291    5780 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 95.791µs
	I0610 10:19:35.057295    5780 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 10:19:35.057296    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0610 10:19:35.057302    5780 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 51.416µs
	I0610 10:19:35.057307    5780 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0610 10:19:35.057287    5780 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0610 10:19:35.057328    5780 cache.go:107] acquiring lock: {Name:mk2e3916ec26ba7dcc9bbe17432dc776f0e32133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:35.057381    5780 cache.go:115] /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0610 10:19:35.057385    5780 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 107.709µs
	I0610 10:19:35.057390    5780 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0610 10:19:35.057394    5780 cache.go:87] Successfully saved all images to host disk.
	I0610 10:19:35.057399    5780 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:35.057409    5780 start.go:364] acquiring machines lock for no-preload-133000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:36.881859    5780 start.go:368] acquired machines lock for "no-preload-133000" in 1.824393333s
	I0610 10:19:36.882004    5780 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:36.882040    5780 fix.go:55] fixHost starting: 
	I0610 10:19:36.882695    5780 fix.go:103] recreateIfNeeded on no-preload-133000: state=Stopped err=<nil>
	W0610 10:19:36.882739    5780 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:36.892346    5780 out.go:177] * Restarting existing qemu2 VM for "no-preload-133000" ...
	I0610 10:19:36.903549    5780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:21:03:35:e7:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:36.913456    5780 main.go:141] libmachine: STDOUT: 
	I0610 10:19:36.913506    5780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:36.913629    5780 fix.go:57] fixHost completed within 31.595125ms
	I0610 10:19:36.913649    5780 start.go:83] releasing machines lock for "no-preload-133000", held for 31.762084ms
	W0610 10:19:36.913676    5780 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:36.913842    5780 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:36.913857    5780 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:41.916033    5780 start.go:364] acquiring machines lock for no-preload-133000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:41.916497    5780 start.go:368] acquired machines lock for "no-preload-133000" in 380.75µs
	I0610 10:19:41.916676    5780 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:41.916695    5780 fix.go:55] fixHost starting: 
	I0610 10:19:41.917422    5780 fix.go:103] recreateIfNeeded on no-preload-133000: state=Stopped err=<nil>
	W0610 10:19:41.917449    5780 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:41.922156    5780 out.go:177] * Restarting existing qemu2 VM for "no-preload-133000" ...
	I0610 10:19:41.926233    5780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:21:03:35:e7:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/no-preload-133000/disk.qcow2
	I0610 10:19:41.934901    5780 main.go:141] libmachine: STDOUT: 
	I0610 10:19:41.934946    5780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:41.935032    5780 fix.go:57] fixHost completed within 18.337917ms
	I0610 10:19:41.935051    5780 start.go:83] releasing machines lock for "no-preload-133000", held for 18.533333ms
	W0610 10:19:41.935243    5780 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-133000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-133000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:41.952025    5780 out.go:177] 
	W0610 10:19:41.956171    5780 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:41.956226    5780 out.go:239] * 
	* 
	W0610 10:19:41.958158    5780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:41.965094    5780 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-133000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (51.184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.07s)
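SecondStart takes the existing-profile path (fixHost, then "Restarting existing qemu2 VM"), retries once after 5 seconds, and exits with GUEST_PROVISION (exit status 80) when the second attempt also cannot reach the socket. The connection step can be reproduced outside minikube by asking socket_vmnet_client to run a trivial command over the same socket; when the daemon is down it should fail with the same "Connection refused" (invocation pattern assumed from how the driver calls it above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected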

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-133000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (34.287583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-133000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-133000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-133000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.867417ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-133000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-133000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (31.837542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-133000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-133000 "sudo crictl images -o json": exit status 89 (37.136ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-133000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-133000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-133000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (28.929333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
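VerifyKubernetesImages fails before any image comparison happens: "minikube ssh" returns exit status 89 because the control plane node is not running, and the test then tries to JSON-decode the "* The control plane node must be running ..." advice text, which produces the "invalid character '*'" error. On a healthy run the same command returns crictl's JSON image list, which could be inspected with something like the following (the jq filter is illustrative; crictl's JSON field names are assumed):

	out/minikube-darwin-arm64 ssh -p no-preload-133000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'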

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-133000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-133000 --alsologtostderr -v=1: exit status 89 (48.573542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-133000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:42.219096    5799 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:42.219204    5799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:42.219207    5799 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:42.219210    5799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:42.219284    5799 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:42.219482    5799 out.go:303] Setting JSON to false
	I0610 10:19:42.219492    5799 mustload.go:65] Loading cluster: no-preload-133000
	I0610 10:19:42.219669    5799 config.go:182] Loaded profile config "no-preload-133000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:42.231538    5799 out.go:177] * The control plane node must be running for this command
	I0610 10:19:42.236183    5799 out.go:177]   To start a cluster, run: "minikube start -p no-preload-133000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-133000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (27.902542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (27.901834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-133000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (10.982344459s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-680000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-680000 in cluster default-k8s-diff-port-680000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-680000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:42.922936    5837 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:42.923060    5837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:42.923063    5837 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:42.923065    5837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:42.923132    5837 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:42.924143    5837 out.go:303] Setting JSON to false
	I0610 10:19:42.939201    5837 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4753,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:42.939267    5837 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:42.949063    5837 out.go:177] * [default-k8s-diff-port-680000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:42.952974    5837 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:42.953037    5837 notify.go:220] Checking for updates...
	I0610 10:19:42.960945    5837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:42.964963    5837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:42.968995    5837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:42.972961    5837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:42.978033    5837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:42.982149    5837 config.go:182] Loaded profile config "embed-certs-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:42.982192    5837 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:42.985967    5837 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:42.992978    5837 start.go:297] selected driver: qemu2
	I0610 10:19:42.992985    5837 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:42.992994    5837 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:42.994895    5837 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 10:19:42.998936    5837 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:43.003065    5837 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:43.003089    5837 cni.go:84] Creating CNI manager for ""
	I0610 10:19:43.003096    5837 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:43.003100    5837 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:19:43.003105    5837 start_flags.go:319] config:
	{Name:default-k8s-diff-port-680000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP:}
	I0610 10:19:43.003185    5837 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:43.011949    5837 out.go:177] * Starting control plane node default-k8s-diff-port-680000 in cluster default-k8s-diff-port-680000
	I0610 10:19:43.014935    5837 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:43.014958    5837 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:43.014972    5837 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:43.015016    5837 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:43.015022    5837 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:43.015090    5837 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/default-k8s-diff-port-680000/config.json ...
	I0610 10:19:43.015103    5837 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/default-k8s-diff-port-680000/config.json: {Name:mk6d3f9f634aa3378e730a6eeaf3efbd8e46353b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:43.015303    5837 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:43.015315    5837 start.go:364] acquiring machines lock for default-k8s-diff-port-680000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:44.219119    5837 start.go:368] acquired machines lock for "default-k8s-diff-port-680000" in 1.203780125s
	I0610 10:19:44.219311    5837 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:44.219666    5837 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:44.227813    5837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:44.276286    5837 start.go:159] libmachine.API.Create for "default-k8s-diff-port-680000" (driver="qemu2")
	I0610 10:19:44.276348    5837 client.go:168] LocalClient.Create starting
	I0610 10:19:44.276487    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:44.276529    5837 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:44.276558    5837 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:44.276645    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:44.276676    5837 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:44.276691    5837 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:44.277294    5837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:44.400094    5837 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:44.500477    5837 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:44.500486    5837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:44.500663    5837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:44.510259    5837 main.go:141] libmachine: STDOUT: 
	I0610 10:19:44.510279    5837 main.go:141] libmachine: STDERR: 
	I0610 10:19:44.510351    5837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2 +20000M
	I0610 10:19:44.518283    5837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:44.518300    5837 main.go:141] libmachine: STDERR: 
	I0610 10:19:44.518320    5837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:44.518330    5837 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:44.518371    5837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b7:0f:9d:39:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:44.520079    5837 main.go:141] libmachine: STDOUT: 
	I0610 10:19:44.520095    5837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:44.520114    5837 client.go:171] LocalClient.Create took 243.763375ms
	I0610 10:19:46.522250    5837 start.go:128] duration metric: createHost completed in 2.302578166s
	I0610 10:19:46.522326    5837 start.go:83] releasing machines lock for "default-k8s-diff-port-680000", held for 2.30320775s
	W0610 10:19:46.522420    5837 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:46.540970    5837 out.go:177] * Deleting "default-k8s-diff-port-680000" in qemu2 ...
	W0610 10:19:46.563612    5837 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:46.563640    5837 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:51.565021    5837 start.go:364] acquiring machines lock for default-k8s-diff-port-680000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:51.583423    5837 start.go:368] acquired machines lock for "default-k8s-diff-port-680000" in 18.338ms
	I0610 10:19:51.583471    5837 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:51.583643    5837 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:51.591937    5837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:51.635692    5837 start.go:159] libmachine.API.Create for "default-k8s-diff-port-680000" (driver="qemu2")
	I0610 10:19:51.635748    5837 client.go:168] LocalClient.Create starting
	I0610 10:19:51.635871    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:51.635923    5837 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:51.635944    5837 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:51.636049    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:51.636078    5837 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:51.636095    5837 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:51.636610    5837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:51.760852    5837 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:51.819376    5837 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:51.819385    5837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:51.819538    5837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:51.828494    5837 main.go:141] libmachine: STDOUT: 
	I0610 10:19:51.828513    5837 main.go:141] libmachine: STDERR: 
	I0610 10:19:51.828564    5837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2 +20000M
	I0610 10:19:51.836476    5837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:51.836494    5837 main.go:141] libmachine: STDERR: 
	I0610 10:19:51.836512    5837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:51.836520    5837 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:51.836560    5837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:52:3f:e8:60:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:51.838340    5837 main.go:141] libmachine: STDOUT: 
	I0610 10:19:51.838374    5837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:51.838387    5837 client.go:171] LocalClient.Create took 202.634875ms
	I0610 10:19:53.840567    5837 start.go:128] duration metric: createHost completed in 2.256929s
	I0610 10:19:53.840681    5837 start.go:83] releasing machines lock for "default-k8s-diff-port-680000", held for 2.2572555s
	W0610 10:19:53.841042    5837 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-680000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-680000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:53.854567    5837 out.go:177] 
	W0610 10:19:53.857583    5837 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:53.857638    5837 out.go:239] * 
	* 
	W0610 10:19:53.860118    5837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:53.868473    5837 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (49.99125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-315000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-315000 create -f testdata/busybox.yaml: exit status 1 (31.672375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-315000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-315000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (32.466458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (31.737916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-315000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-315000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-315000 describe deploy/metrics-server -n kube-system: exit status 1 (28.492083ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-315000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-315000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (28.802833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-315000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-315000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (6.945321333s)

                                                
                                                
-- stdout --
	* [embed-certs-315000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-315000 in cluster embed-certs-315000
	* Restarting existing qemu2 VM for "embed-certs-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:44.705227    5864 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:44.705373    5864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:44.705376    5864 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:44.705378    5864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:44.705451    5864 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:44.706448    5864 out.go:303] Setting JSON to false
	I0610 10:19:44.721781    5864 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4755,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:44.721844    5864 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:44.726874    5864 out.go:177] * [embed-certs-315000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:44.729735    5864 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:44.737838    5864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:44.729786    5864 notify.go:220] Checking for updates...
	I0610 10:19:44.741649    5864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:44.744802    5864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:44.747808    5864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:44.750817    5864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:44.754029    5864 config.go:182] Loaded profile config "embed-certs-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:44.754283    5864 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:44.758758    5864 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:19:44.765746    5864 start.go:297] selected driver: qemu2
	I0610 10:19:44.765755    5864 start.go:875] validating driver "qemu2" against &{Name:embed-certs-315000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:44.765829    5864 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:44.767696    5864 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:44.767717    5864 cni.go:84] Creating CNI manager for ""
	I0610 10:19:44.767724    5864 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:44.767729    5864 start_flags.go:319] config:
	{Name:embed-certs-315000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-315000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:44.767810    5864 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:44.773685    5864 out.go:177] * Starting control plane node embed-certs-315000 in cluster embed-certs-315000
	I0610 10:19:44.777763    5864 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:44.777796    5864 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:44.777809    5864 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:44.777865    5864 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:44.777870    5864 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:44.777934    5864 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/embed-certs-315000/config.json ...
	I0610 10:19:44.778275    5864 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:44.778284    5864 start.go:364] acquiring machines lock for embed-certs-315000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:46.522467    5864 start.go:368] acquired machines lock for "embed-certs-315000" in 1.744186959s
	I0610 10:19:46.522673    5864 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:46.522707    5864 fix.go:55] fixHost starting: 
	I0610 10:19:46.523349    5864 fix.go:103] recreateIfNeeded on embed-certs-315000: state=Stopped err=<nil>
	W0610 10:19:46.523388    5864 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:46.532971    5864 out.go:177] * Restarting existing qemu2 VM for "embed-certs-315000" ...
	I0610 10:19:46.544153    5864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bc:2d:00:78:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:46.556303    5864 main.go:141] libmachine: STDOUT: 
	I0610 10:19:46.556361    5864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:46.556492    5864 fix.go:57] fixHost completed within 33.78975ms
	I0610 10:19:46.556512    5864 start.go:83] releasing machines lock for "embed-certs-315000", held for 34.007291ms
	W0610 10:19:46.556543    5864 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:46.556725    5864 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:46.556740    5864 start.go:702] Will try again in 5 seconds ...
	I0610 10:19:51.558872    5864 start.go:364] acquiring machines lock for embed-certs-315000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:51.559501    5864 start.go:368] acquired machines lock for "embed-certs-315000" in 530.708µs
	I0610 10:19:51.559699    5864 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:51.559719    5864 fix.go:55] fixHost starting: 
	I0610 10:19:51.560461    5864 fix.go:103] recreateIfNeeded on embed-certs-315000: state=Stopped err=<nil>
	W0610 10:19:51.560489    5864 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:51.569934    5864 out.go:177] * Restarting existing qemu2 VM for "embed-certs-315000" ...
	I0610 10:19:51.574139    5864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bc:2d:00:78:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/embed-certs-315000/disk.qcow2
	I0610 10:19:51.583198    5864 main.go:141] libmachine: STDOUT: 
	I0610 10:19:51.583248    5864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:51.583330    5864 fix.go:57] fixHost completed within 23.613375ms
	I0610 10:19:51.583346    5864 start.go:83] releasing machines lock for "embed-certs-315000", held for 23.823625ms
	W0610 10:19:51.583481    5864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:51.598941    5864 out.go:177] 
	W0610 10:19:51.602985    5864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:51.603002    5864 out.go:239] * 
	* 
	W0610 10:19:51.604695    5864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:19:51.613928    5864 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-315000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (48.223958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-315000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (33.458167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-315000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-315000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-315000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.658167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-315000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-315000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (31.94675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-315000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-315000 "sudo crictl images -o json": exit status 89 (38.443375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-315000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-315000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-315000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (28.14225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-315000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-315000 --alsologtostderr -v=1: exit status 89 (41.427917ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-315000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:51.863635    5889 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:51.863777    5889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:51.863780    5889 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:51.863782    5889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:51.863856    5889 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:51.864331    5889 out.go:303] Setting JSON to false
	I0610 10:19:51.864343    5889 mustload.go:65] Loading cluster: embed-certs-315000
	I0610 10:19:51.864929    5889 config.go:182] Loaded profile config "embed-certs-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:51.869896    5889 out.go:177] * The control plane node must be running for this command
	I0610 10:19:51.874073    5889 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-315000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-315000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (27.926334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (27.887833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-785000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-785000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.1764345s)

                                                
                                                
-- stdout --
	* [newest-cni-785000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-785000 in cluster newest-cni-785000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-785000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:52.321386    5912 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:52.321494    5912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:52.321497    5912 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:52.321500    5912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:52.321574    5912 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:52.322618    5912 out.go:303] Setting JSON to false
	I0610 10:19:52.337684    5912 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4763,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:52.337762    5912 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:52.342928    5912 out.go:177] * [newest-cni-785000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:52.349984    5912 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:52.350025    5912 notify.go:220] Checking for updates...
	I0610 10:19:52.353895    5912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:52.356991    5912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:52.359913    5912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:52.362894    5912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:52.365900    5912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:52.369163    5912 config.go:182] Loaded profile config "default-k8s-diff-port-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:52.369211    5912 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:52.373871    5912 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 10:19:52.379856    5912 start.go:297] selected driver: qemu2
	I0610 10:19:52.379861    5912 start.go:875] validating driver "qemu2" against <nil>
	I0610 10:19:52.379869    5912 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:52.381723    5912 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0610 10:19:52.381746    5912 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0610 10:19:52.389868    5912 out.go:177] * Automatically selected the socket_vmnet network
	I0610 10:19:52.393021    5912 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 10:19:52.393038    5912 cni.go:84] Creating CNI manager for ""
	I0610 10:19:52.393044    5912 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:52.393049    5912 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:19:52.393055    5912 start_flags.go:319] config:
	{Name:newest-cni-785000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:52.393147    5912 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:52.400934    5912 out.go:177] * Starting control plane node newest-cni-785000 in cluster newest-cni-785000
	I0610 10:19:52.404849    5912 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:52.404883    5912 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:52.404895    5912 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:52.404945    5912 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:52.404950    5912 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:52.405003    5912 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/newest-cni-785000/config.json ...
	I0610 10:19:52.405015    5912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/newest-cni-785000/config.json: {Name:mke52d693eb438e45c48ed0d8e08fefd2edd6c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:19:52.405205    5912 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:52.405218    5912 start.go:364] acquiring machines lock for newest-cni-785000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:53.840802    5912 start.go:368] acquired machines lock for "newest-cni-785000" in 1.435566708s
	I0610 10:19:53.841005    5912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:19:53.841261    5912 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:19:53.850514    5912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:19:53.895810    5912 start.go:159] libmachine.API.Create for "newest-cni-785000" (driver="qemu2")
	I0610 10:19:53.895857    5912 client.go:168] LocalClient.Create starting
	I0610 10:19:53.895971    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:19:53.896011    5912 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:53.896028    5912 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:53.896088    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:19:53.896115    5912 main.go:141] libmachine: Decoding PEM data...
	I0610 10:19:53.896130    5912 main.go:141] libmachine: Parsing certificate...
	I0610 10:19:53.896699    5912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:19:54.022812    5912 main.go:141] libmachine: Creating SSH key...
	I0610 10:19:54.091237    5912 main.go:141] libmachine: Creating Disk image...
	I0610 10:19:54.091249    5912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:19:54.091424    5912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:19:54.100587    5912 main.go:141] libmachine: STDOUT: 
	I0610 10:19:54.100608    5912 main.go:141] libmachine: STDERR: 
	I0610 10:19:54.100663    5912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2 +20000M
	I0610 10:19:54.108534    5912 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:19:54.108552    5912 main.go:141] libmachine: STDERR: 
	I0610 10:19:54.108572    5912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:19:54.108578    5912 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:19:54.108622    5912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:50:e5:1b:d3:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:19:54.110531    5912 main.go:141] libmachine: STDOUT: 
	I0610 10:19:54.110553    5912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:54.110576    5912 client.go:171] LocalClient.Create took 214.715792ms
	I0610 10:19:56.112727    5912 start.go:128] duration metric: createHost completed in 2.271470583s
	I0610 10:19:56.112803    5912 start.go:83] releasing machines lock for "newest-cni-785000", held for 2.272003042s
	W0610 10:19:56.112864    5912 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:56.125412    5912 out.go:177] * Deleting "newest-cni-785000" in qemu2 ...
	W0610 10:19:56.148012    5912 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:56.148047    5912 start.go:702] Will try again in 5 seconds ...
	I0610 10:20:01.150191    5912 start.go:364] acquiring machines lock for newest-cni-785000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:20:01.165756    5912 start.go:368] acquired machines lock for "newest-cni-785000" in 15.461916ms
	I0610 10:20:01.165852    5912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 10:20:01.166243    5912 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 10:20:01.177476    5912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:20:01.223455    5912 start.go:159] libmachine.API.Create for "newest-cni-785000" (driver="qemu2")
	I0610 10:20:01.223491    5912 client.go:168] LocalClient.Create starting
	I0610 10:20:01.223596    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/ca.pem
	I0610 10:20:01.223632    5912 main.go:141] libmachine: Decoding PEM data...
	I0610 10:20:01.223657    5912 main.go:141] libmachine: Parsing certificate...
	I0610 10:20:01.223752    5912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16578-1150/.minikube/certs/cert.pem
	I0610 10:20:01.223783    5912 main.go:141] libmachine: Decoding PEM data...
	I0610 10:20:01.223799    5912 main.go:141] libmachine: Parsing certificate...
	I0610 10:20:01.224371    5912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 10:20:01.349784    5912 main.go:141] libmachine: Creating SSH key...
	I0610 10:20:01.404320    5912 main.go:141] libmachine: Creating Disk image...
	I0610 10:20:01.404334    5912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 10:20:01.404523    5912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:20:01.414182    5912 main.go:141] libmachine: STDOUT: 
	I0610 10:20:01.414219    5912 main.go:141] libmachine: STDERR: 
	I0610 10:20:01.414282    5912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2 +20000M
	I0610 10:20:01.422234    5912 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 10:20:01.422254    5912 main.go:141] libmachine: STDERR: 
	I0610 10:20:01.422267    5912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:20:01.422274    5912 main.go:141] libmachine: Starting QEMU VM...
	I0610 10:20:01.422316    5912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:53:13:41:b5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:20:01.423991    5912 main.go:141] libmachine: STDOUT: 
	I0610 10:20:01.424005    5912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:20:01.424022    5912 client.go:171] LocalClient.Create took 200.529125ms
	I0610 10:20:03.426258    5912 start.go:128] duration metric: createHost completed in 2.260006917s
	I0610 10:20:03.426322    5912 start.go:83] releasing machines lock for "newest-cni-785000", held for 2.260575167s
	W0610 10:20:03.426641    5912 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:20:03.437387    5912 out.go:177] 
	W0610 10:20:03.442590    5912 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:20:03.442644    5912 out.go:239] * 
	* 
	W0610 10:20:03.445082    5912 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:20:03.454380    5912 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-785000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000: exit status 7 (72.436834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.25s)
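Every qemu2 start in this run fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so libmachine never gets a network file descriptor for QEMU. A minimal triage sketch on the CI host, assuming the /opt/socket_vmnet layout shown in the command lines above (the launchd/Homebrew service names are assumptions, not taken from this log):

    # Is the daemon's unix socket present, and is a socket_vmnet process alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If socket_vmnet is managed as a launchd / Homebrew service (assumption), check and restart it:
    sudo launchctl list | grep -i socket_vmnet
    sudo brew services restart socket_vmnet

If the socket exists but connections are still refused, the daemon may be running under a different path than the SocketVMnetPath recorded in the profile config above.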

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-680000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-680000 create -f testdata/busybox.yaml: exit status 1 (31.2785ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-680000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-680000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (32.877959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (31.766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
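The `context "default-k8s-diff-port-680000" does not exist` errors in this and the following subtests are downstream of the failed start: the cluster never came up, so no kubeconfig context was written. A quick way to confirm that on the host, using only standard kubectl and the minikube binary built by this run:

    kubectl config get-contexts
    out/minikube-darwin-arm64 profile list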

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-680000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-680000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-680000 describe deploy/metrics-server -n kube-system: exit status 1 (27.568084ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-680000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-680000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (28.446459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (6.914073291s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-680000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-680000 in cluster default-k8s-diff-port-680000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-680000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-680000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:19:54.314642    5939 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:19:54.314769    5939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:54.314771    5939 out.go:309] Setting ErrFile to fd 2...
	I0610 10:19:54.314774    5939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:19:54.314836    5939 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:19:54.315773    5939 out.go:303] Setting JSON to false
	I0610 10:19:54.331015    5939 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4765,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:19:54.331082    5939 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:19:54.335545    5939 out.go:177] * [default-k8s-diff-port-680000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:19:54.341464    5939 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:19:54.345531    5939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:19:54.341512    5939 notify.go:220] Checking for updates...
	I0610 10:19:54.351498    5939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:19:54.354515    5939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:19:54.357524    5939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:19:54.358924    5939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:19:54.361785    5939 config.go:182] Loaded profile config "default-k8s-diff-port-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:19:54.362032    5939 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:19:54.366504    5939 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:19:54.371451    5939 start.go:297] selected driver: qemu2
	I0610 10:19:54.371455    5939 start.go:875] validating driver "qemu2" against &{Name:default-k8s-diff-port-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:54.371512    5939 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:19:54.373334    5939 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:19:54.373358    5939 cni.go:84] Creating CNI manager for ""
	I0610 10:19:54.373365    5939 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:19:54.373369    5939 start_flags.go:319] config:
	{Name:default-k8s-diff-port-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-6800
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:19:54.373447    5939 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:19:54.380502    5939 out.go:177] * Starting control plane node default-k8s-diff-port-680000 in cluster default-k8s-diff-port-680000
	I0610 10:19:54.384531    5939 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:19:54.384548    5939 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:19:54.384566    5939 cache.go:57] Caching tarball of preloaded images
	I0610 10:19:54.384628    5939 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:19:54.384634    5939 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:19:54.384707    5939 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/default-k8s-diff-port-680000/config.json ...
	I0610 10:19:54.385061    5939 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:19:54.385071    5939 start.go:364] acquiring machines lock for default-k8s-diff-port-680000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:19:56.112993    5939 start.go:368] acquired machines lock for "default-k8s-diff-port-680000" in 1.727847209s
	I0610 10:19:56.113103    5939 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:19:56.113145    5939 fix.go:55] fixHost starting: 
	I0610 10:19:56.113847    5939 fix.go:103] recreateIfNeeded on default-k8s-diff-port-680000: state=Stopped err=<nil>
	W0610 10:19:56.113892    5939 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:19:56.122493    5939 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-680000" ...
	I0610 10:19:56.129698    5939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:52:3f:e8:60:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:19:56.139596    5939 main.go:141] libmachine: STDOUT: 
	I0610 10:19:56.139646    5939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:19:56.139772    5939 fix.go:57] fixHost completed within 26.6335ms
	I0610 10:19:56.139793    5939 start.go:83] releasing machines lock for "default-k8s-diff-port-680000", held for 26.752708ms
	W0610 10:19:56.139823    5939 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:19:56.139983    5939 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:19:56.139999    5939 start.go:702] Will try again in 5 seconds ...
	I0610 10:20:01.142177    5939 start.go:364] acquiring machines lock for default-k8s-diff-port-680000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:20:01.142657    5939 start.go:368] acquired machines lock for "default-k8s-diff-port-680000" in 379.833µs
	I0610 10:20:01.142826    5939 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:20:01.142845    5939 fix.go:55] fixHost starting: 
	I0610 10:20:01.143641    5939 fix.go:103] recreateIfNeeded on default-k8s-diff-port-680000: state=Stopped err=<nil>
	W0610 10:20:01.143668    5939 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:20:01.152434    5939 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-680000" ...
	I0610 10:20:01.156594    5939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:52:3f:e8:60:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/default-k8s-diff-port-680000/disk.qcow2
	I0610 10:20:01.165462    5939 main.go:141] libmachine: STDOUT: 
	I0610 10:20:01.165531    5939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:20:01.165631    5939 fix.go:57] fixHost completed within 22.785292ms
	I0610 10:20:01.165654    5939 start.go:83] releasing machines lock for "default-k8s-diff-port-680000", held for 22.973958ms
	W0610 10:20:01.165883    5939 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-680000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-680000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:20:01.177476    5939 out.go:177] 
	W0610 10:20:01.181493    5939 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:20:01.181546    5939 out.go:239] * 
	* 
	W0610 10:20:01.184504    5939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:20:01.194411    5939 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (52.280375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.97s)
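The restart path fails the same way as the fresh create: the existing VM cannot be brought back because /var/run/socket_vmnet refuses connections. Once the daemon is reachable again, the recovery the log itself suggests is to drop the stale profile and start over; a sketch using the profile name and flags from this test:

    out/minikube-darwin-arm64 delete -p default-k8s-diff-port-680000
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-680000 --memory=2200 --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.27.2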

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-680000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (33.896125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-680000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-680000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-680000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.109041ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-680000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-680000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (32.505708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-680000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-680000 "sudo crictl images -o json": exit status 89 (41.579459ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-680000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-680000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-680000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (28.766958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-680000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-680000 --alsologtostderr -v=1: exit status 89 (40.911542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-680000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:20:01.450985    5963 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:20:01.451164    5963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:01.451167    5963 out.go:309] Setting ErrFile to fd 2...
	I0610 10:20:01.451169    5963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:01.451238    5963 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:20:01.451448    5963 out.go:303] Setting JSON to false
	I0610 10:20:01.451456    5963 mustload.go:65] Loading cluster: default-k8s-diff-port-680000
	I0610 10:20:01.451636    5963 config.go:182] Loaded profile config "default-k8s-diff-port-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:20:01.456504    5963 out.go:177] * The control plane node must be running for this command
	I0610 10:20:01.460480    5963 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-680000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-680000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (27.942959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (28.132333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-785000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-785000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.174048667s)

                                                
                                                
-- stdout --
	* [newest-cni-785000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-785000 in cluster newest-cni-785000
	* Restarting existing qemu2 VM for "newest-cni-785000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-785000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:20:03.797274    5998 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:20:03.797398    5998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:03.797401    5998 out.go:309] Setting ErrFile to fd 2...
	I0610 10:20:03.797403    5998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:03.797473    5998 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:20:03.798509    5998 out.go:303] Setting JSON to false
	I0610 10:20:03.813753    5998 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4774,"bootTime":1686412829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 10:20:03.813811    5998 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 10:20:03.822880    5998 out.go:177] * [newest-cni-785000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 10:20:03.826972    5998 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 10:20:03.830928    5998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 10:20:03.827036    5998 notify.go:220] Checking for updates...
	I0610 10:20:03.833922    5998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 10:20:03.836897    5998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:20:03.839939    5998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 10:20:03.842888    5998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:20:03.846133    5998 config.go:182] Loaded profile config "newest-cni-785000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:20:03.846374    5998 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 10:20:03.850903    5998 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 10:20:03.857925    5998 start.go:297] selected driver: qemu2
	I0610 10:20:03.857930    5998 start.go:875] validating driver "qemu2" against &{Name:newest-cni-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:20:03.857991    5998 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:20:03.859871    5998 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 10:20:03.859890    5998 cni.go:84] Creating CNI manager for ""
	I0610 10:20:03.859897    5998 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 10:20:03.859901    5998 start_flags.go:319] config:
	{Name:newest-cni-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-785000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 10:20:03.859977    5998 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:20:03.865845    5998 out.go:177] * Starting control plane node newest-cni-785000 in cluster newest-cni-785000
	I0610 10:20:03.869883    5998 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 10:20:03.869915    5998 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 10:20:03.869928    5998 cache.go:57] Caching tarball of preloaded images
	I0610 10:20:03.869976    5998 preload.go:174] Found /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 10:20:03.869981    5998 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 10:20:03.870046    5998 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/newest-cni-785000/config.json ...
	I0610 10:20:03.870339    5998 cache.go:195] Successfully downloaded all kic artifacts
	I0610 10:20:03.870349    5998 start.go:364] acquiring machines lock for newest-cni-785000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:20:03.870377    5998 start.go:368] acquired machines lock for "newest-cni-785000" in 23.709µs
	I0610 10:20:03.870391    5998 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:20:03.870397    5998 fix.go:55] fixHost starting: 
	I0610 10:20:03.870514    5998 fix.go:103] recreateIfNeeded on newest-cni-785000: state=Stopped err=<nil>
	W0610 10:20:03.870523    5998 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:20:03.874860    5998 out.go:177] * Restarting existing qemu2 VM for "newest-cni-785000" ...
	I0610 10:20:03.882866    5998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:53:13:41:b5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:20:03.884641    5998 main.go:141] libmachine: STDOUT: 
	I0610 10:20:03.884660    5998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:20:03.884692    5998 fix.go:57] fixHost completed within 14.296125ms
	I0610 10:20:03.884697    5998 start.go:83] releasing machines lock for "newest-cni-785000", held for 14.316ms
	W0610 10:20:03.884703    5998 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:20:03.884738    5998 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:20:03.884742    5998 start.go:702] Will try again in 5 seconds ...
	I0610 10:20:08.886822    5998 start.go:364] acquiring machines lock for newest-cni-785000: {Name:mk49caeabbc8e0bfd7d71eba2b6195cc476054c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:20:08.887252    5998 start.go:368] acquired machines lock for "newest-cni-785000" in 345.084µs
	I0610 10:20:08.887407    5998 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:20:08.887426    5998 fix.go:55] fixHost starting: 
	I0610 10:20:08.888149    5998 fix.go:103] recreateIfNeeded on newest-cni-785000: state=Stopped err=<nil>
	W0610 10:20:08.888174    5998 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 10:20:08.895503    5998 out.go:177] * Restarting existing qemu2 VM for "newest-cni-785000" ...
	I0610 10:20:08.899743    5998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:53:13:41:b5:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/newest-cni-785000/disk.qcow2
	I0610 10:20:08.909055    5998 main.go:141] libmachine: STDOUT: 
	I0610 10:20:08.909106    5998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 10:20:08.909183    5998 fix.go:57] fixHost completed within 21.758709ms
	I0610 10:20:08.909201    5998 start.go:83] releasing machines lock for "newest-cni-785000", held for 21.926208ms
	W0610 10:20:08.909400    5998 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-785000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-785000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 10:20:08.917534    5998 out.go:177] 
	W0610 10:20:08.920599    5998 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 10:20:08.920615    5998 out.go:239] * 
	* 
	W0610 10:20:08.922593    5998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:20:08.931515    5998 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-785000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000: exit status 7 (69.293917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
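Every restart attempt in this run fails at the same step: the qemu2 driver's socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never comes back up. A minimal host-side triage sketch, assuming socket_vmnet lives under /opt/socket_vmnet as the profile's SocketVMnetClientPath indicates (how the daemon is supervised, launchd, Homebrew service, or manual, depends on the install):

	# Is the socket_vmnet daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# Once the daemon is back, the log's own suggestion applies: delete the
	# profile, then rerun the full start command quoted at
	# start_stop_delete_test.go:259 above.
	out/minikube-darwin-arm64 delete -p newest-cni-785000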

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-785000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-785000 "sudo crictl images -o json": exit status 89 (46.194542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-785000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-785000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-785000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000: exit status 7 (29.473208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
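The image verification is a thin wrapper around crictl inside the guest, so once the node is actually running the same probe can be replayed by hand. A sketch, assuming jq is available on the host and the usual crictl JSON shape (an "images" array whose entries carry "repoTags"; field names can differ between crictl versions):

	# List image tags inside the guest the way the test does, then flatten them
	out/minikube-darwin-arm64 ssh -p newest-cni-785000 "sudo crictl images -o json" \
		| jq -r '.images[].repoTags[]' | sort
	# The test diffs this list against the expected v1.27.2 image set shown above.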

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-785000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-785000 --alsologtostderr -v=1: exit status 89 (40.101375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-785000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:20:09.119029    6011 out.go:296] Setting OutFile to fd 1 ...
	I0610 10:20:09.119173    6011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:09.119176    6011 out.go:309] Setting ErrFile to fd 2...
	I0610 10:20:09.119179    6011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:09.119261    6011 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 10:20:09.119470    6011 out.go:303] Setting JSON to false
	I0610 10:20:09.119479    6011 mustload.go:65] Loading cluster: newest-cni-785000
	I0610 10:20:09.119649    6011 config.go:182] Loaded profile config "newest-cni-785000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 10:20:09.122525    6011 out.go:177] * The control plane node must be running for this command
	I0610 10:20:09.126435    6011 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-785000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-785000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000: exit status 7 (29.115125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-785000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000: exit status 7 (29.422958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-785000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (151/258)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.2/json-events 19.82
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.37
22 TestAddons/Setup 404.97
31 TestAddons/parallel/Headlamp 11.38
32 TestAddons/parallel/CloudSpanner 5.22
35 TestAddons/serial/GCPAuth/Namespaces 0.08
36 TestAddons/StoppedEnableDisable 12.27
43 TestHyperKitDriverInstallOrUpdate 8.68
46 TestErrorSpam/setup 28.88
47 TestErrorSpam/start 0.33
48 TestErrorSpam/status 0.26
49 TestErrorSpam/pause 0.62
50 TestErrorSpam/unpause 0.6
51 TestErrorSpam/stop 3.24
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 45.99
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 35.36
58 TestFunctional/serial/KubeContext 0.03
59 TestFunctional/serial/KubectlGetPods 0.05
62 TestFunctional/serial/CacheCmd/cache/add_remote 6.04
63 TestFunctional/serial/CacheCmd/cache/add_local 1.3
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
65 TestFunctional/serial/CacheCmd/cache/list 0.03
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.3
68 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/serial/MinikubeKubectlCmd 0.5
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.62
71 TestFunctional/serial/ExtraConfig 36.83
72 TestFunctional/serial/ComponentHealth 0.04
73 TestFunctional/serial/LogsCmd 0.66
74 TestFunctional/serial/LogsFileCmd 0.62
76 TestFunctional/parallel/ConfigCmd 0.21
77 TestFunctional/parallel/DashboardCmd 7.91
78 TestFunctional/parallel/DryRun 0.22
79 TestFunctional/parallel/InternationalLanguage 0.11
80 TestFunctional/parallel/StatusCmd 0.29
85 TestFunctional/parallel/AddonsCmd 0.12
86 TestFunctional/parallel/PersistentVolumeClaim 24.18
88 TestFunctional/parallel/SSHCmd 0.15
89 TestFunctional/parallel/CpCmd 0.31
91 TestFunctional/parallel/FileSync 0.08
92 TestFunctional/parallel/CertSync 0.46
96 TestFunctional/parallel/NodeLabels 0.04
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
100 TestFunctional/parallel/License 0.53
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.09
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
112 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
113 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
115 TestFunctional/parallel/ServiceCmd/DeployApp 6.1
116 TestFunctional/parallel/ServiceCmd/List 0.32
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
119 TestFunctional/parallel/ServiceCmd/Format 0.11
120 TestFunctional/parallel/ServiceCmd/URL 0.11
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.2
122 TestFunctional/parallel/ProfileCmd/profile_list 0.16
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
124 TestFunctional/parallel/MountCmd/any-port 6.05
125 TestFunctional/parallel/MountCmd/specific-port 0.82
126 TestFunctional/parallel/MountCmd/VerifyCleanup 0.89
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.11
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.09
131 TestFunctional/parallel/ImageCommands/ImageBuild 2.9
132 TestFunctional/parallel/ImageCommands/Setup 2.53
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.17
134 TestFunctional/parallel/Version/short 0.04
135 TestFunctional/parallel/Version/components 0.24
136 TestFunctional/parallel/DockerEnv/bash 0.43
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.62
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.44
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.18
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.58
143 TestFunctional/delete_addon-resizer_images 0.12
144 TestFunctional/delete_my-image_image 0.04
145 TestFunctional/delete_minikube_cached_images 0.04
149 TestImageBuild/serial/Setup 30.18
150 TestImageBuild/serial/NormalBuild 2.22
152 TestImageBuild/serial/BuildWithDockerIgnore 0.11
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
156 TestIngressAddonLegacy/StartLegacyK8sCluster 86
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.82
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.18
163 TestJSONOutput/start/Command 45.7
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.3
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.23
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 12.08
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.33
191 TestMainNoArgs 0.03
195 TestMountStart/serial/StartWithMountFirst 18.38
196 TestMountStart/serial/VerifyMountFirst 0.19
197 TestMountStart/serial/StartWithMountSecond 18.31
198 TestMountStart/serial/VerifyMountSecond 0.19
199 TestMountStart/serial/DeleteFirst 0.1
203 TestMultiNode/serial/FreshStart2Nodes 82.14
204 TestMultiNode/serial/DeployApp2Nodes 4.63
205 TestMultiNode/serial/PingHostFrom2Pods 0.54
206 TestMultiNode/serial/AddNode 35.82
207 TestMultiNode/serial/ProfileList 0.17
208 TestMultiNode/serial/CopyFile 2.57
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
258 TestNoKubernetes/serial/ProfileList 0.14
259 TestNoKubernetes/serial/Stop 0.07
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
279 TestStartStop/group/old-k8s-version/serial/Stop 0.06
280 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
290 TestStartStop/group/no-preload/serial/Stop 0.06
291 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
301 TestStartStop/group/embed-certs/serial/Stop 0.07
302 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
319 TestStartStop/group/newest-cni/serial/DeployApp 0
320 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
321 TestStartStop/group/newest-cni/serial/Stop 0.07
322 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
324 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-879000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-879000: exit status 85 (94.323625ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |          |
	|         | -p download-only-879000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:09.082342    1566 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:09.082479    1566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:09.082482    1566 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:09.082484    1566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:09.082556    1566 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	W0610 09:21:09.082614    1566 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16578-1150/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16578-1150/.minikube/config/config.json: no such file or directory
	I0610 09:21:09.083773    1566 out.go:303] Setting JSON to true
	I0610 09:21:09.100627    1566 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1240,"bootTime":1686412829,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:09.100688    1566 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:09.105725    1566 out.go:97] [download-only-879000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:09.108740    1566 out.go:169] MINIKUBE_LOCATION=16578
	I0610 09:21:09.105888    1566 notify.go:220] Checking for updates...
	W0610 09:21:09.105902    1566 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 09:21:09.113627    1566 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:09.116758    1566 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:09.119697    1566 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:09.122717    1566 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	W0610 09:21:09.127014    1566 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 09:21:09.127237    1566 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:21:09.132703    1566 out.go:97] Using the qemu2 driver based on user configuration
	I0610 09:21:09.132723    1566 start.go:297] selected driver: qemu2
	I0610 09:21:09.132727    1566 start.go:875] validating driver "qemu2" against <nil>
	I0610 09:21:09.132797    1566 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 09:21:09.136697    1566 out.go:169] Automatically selected the socket_vmnet network
	I0610 09:21:09.142009    1566 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 09:21:09.142085    1566 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 09:21:09.142120    1566 cni.go:84] Creating CNI manager for ""
	I0610 09:21:09.142136    1566 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 09:21:09.142140    1566 start_flags.go:319] config:
	{Name:download-only-879000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-879000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:09.142297    1566 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:09.146699    1566 out.go:97] Downloading VM boot image ...
	I0610 09:21:09.146716    1566 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso
	I0610 09:21:19.325617    1566 out.go:97] Starting control plane node download-only-879000 in cluster download-only-879000
	I0610 09:21:19.325645    1566 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:19.426955    1566 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 09:21:19.427026    1566 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:19.427218    1566 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:19.432366    1566 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 09:21:19.432375    1566 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:19.661311    1566 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 09:21:32.022545    1566 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:32.022682    1566 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:32.673657    1566 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 09:21:32.673847    1566 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/download-only-879000/config.json ...
	I0610 09:21:32.673866    1566 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/download-only-879000/config.json: {Name:mk8ea572823972a0ca150d4787089a831e408f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 09:21:32.674099    1566 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 09:21:32.674285    1566 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0610 09:21:33.582780    1566 out.go:169] 
	W0610 09:21:33.586665    1566 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16578-1150/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28 0x107d73f28] Decompressors:map[bz2:0x14000526928 gz:0x14000526980 tar:0x14000526930 tar.bz2:0x14000526940 tar.gz:0x14000526950 tar.xz:0x14000526960 tar.zst:0x14000526970 tbz2:0x14000526940 tgz:0x14000526950 txz:0x14000526960 tzst:0x14000526970 xz:0x14000526988 zip:0x14000526990 zst:0x140005269a0] Getters:map[file:0x140010a65a0 http:0x14000acc140 https:0x14000acc190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 09:21:33.586692    1566 out_reason.go:110] 
	W0610 09:21:33.593774    1566 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 09:21:33.597773    1566 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-879000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
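The 404 recorded above is expected rather than transient: Kubernetes v1.16.0 predates darwin/arm64 release binaries, so the kubectl checksum URL the downloader requests does not exist upstream. A quick way to confirm from any host (a sketch; -L follows the dl.k8s.io redirect to the release bucket and -w prints only the final HTTP status):

	curl -s -o /dev/null -w '%{http_code}\n' -L https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1
	# Expected output: 404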

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (19.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-879000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-879000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 : (19.816162667s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (19.82s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-879000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-879000: exit status 85 (81.398916ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |          |
	|         | -p download-only-879000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.30.1 | 10 Jun 23 09:21 PDT |          |
	|         | -p download-only-879000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 09:21:33
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 09:21:33.783411    1578 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:21:33.783590    1578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:33.783592    1578 out.go:309] Setting ErrFile to fd 2...
	I0610 09:21:33.783595    1578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:21:33.783665    1578 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	W0610 09:21:33.783721    1578 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16578-1150/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16578-1150/.minikube/config/config.json: no such file or directory
	I0610 09:21:33.784570    1578 out.go:303] Setting JSON to true
	I0610 09:21:33.799447    1578 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1264,"bootTime":1686412829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:21:33.799531    1578 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:21:33.804744    1578 out.go:97] [download-only-879000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:21:33.808739    1578 out.go:169] MINIKUBE_LOCATION=16578
	I0610 09:21:33.804821    1578 notify.go:220] Checking for updates...
	I0610 09:21:33.814729    1578 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:21:33.817748    1578 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:21:33.820670    1578 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:21:33.823691    1578 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	W0610 09:21:33.829669    1578 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 09:21:33.829966    1578 config.go:182] Loaded profile config "download-only-879000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0610 09:21:33.829990    1578 start.go:783] api.Load failed for download-only-879000: filestore "download-only-879000": Docker machine "download-only-879000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 09:21:33.830034    1578 driver.go:375] Setting default libvirt URI to qemu:///system
	W0610 09:21:33.830051    1578 start.go:783] api.Load failed for download-only-879000: filestore "download-only-879000": Docker machine "download-only-879000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 09:21:33.833689    1578 out.go:97] Using the qemu2 driver based on existing profile
	I0610 09:21:33.833698    1578 start.go:297] selected driver: qemu2
	I0610 09:21:33.833701    1578 start.go:875] validating driver "qemu2" against &{Name:download-only-879000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-879000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:33.835566    1578 cni.go:84] Creating CNI manager for ""
	I0610 09:21:33.835578    1578 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 09:21:33.835585    1578 start_flags.go:319] config:
	{Name:download-only-879000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-879000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:21:33.835664    1578 iso.go:125] acquiring lock: {Name:mk0a3e18b0ab39fea8fa845439dffee1684e89da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 09:21:33.838711    1578 out.go:97] Starting control plane node download-only-879000 in cluster download-only-879000
	I0610 09:21:33.838719    1578 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:34.058631    1578 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 09:21:34.058689    1578 cache.go:57] Caching tarball of preloaded images
	I0610 09:21:34.059360    1578 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 09:21:34.064647    1578 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0610 09:21:34.064718    1578 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0610 09:21:34.298207    1578 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4?checksum=md5:4271952d77a401a4cbcfc4225771d46f -> /Users/jenkins/minikube-integration/16578-1150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-879000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-879000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.37s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-025000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-025000
--- PASS: TestBinaryMirror (0.37s)

TestAddons/Setup (404.97s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-098000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-098000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m44.974507875s)
--- PASS: TestAddons/Setup (404.97s)

TestAddons/parallel/Headlamp (11.38s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-098000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-6wqrt" [048c5a4e-b72c-4568-93a2-1d8ca6c859cd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-6wqrt" [048c5a4e-b72c-4568-93a2-1d8ca6c859cd] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.008015584s
--- PASS: TestAddons/parallel/Headlamp (11.38s)

TestAddons/parallel/CloudSpanner (5.22s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-jntl7" [23a2f7d4-3de4-4494-b3cc-57df20e4578b] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01026275s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-098000
--- PASS: TestAddons/parallel/CloudSpanner (5.22s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-098000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-098000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/StoppedEnableDisable (12.27s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-098000
addons_test.go:148: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-098000: (12.088146417s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-098000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-098000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-098000
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestHyperKitDriverInstallOrUpdate (8.68s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
E0610 10:13:39.649874    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (8.68s)

TestErrorSpam/setup (28.88s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-370000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-370000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 --driver=qemu2 : (28.8831205s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2."
--- PASS: TestErrorSpam/setup (28.88s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.26s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.62s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 pause
--- PASS: TestErrorSpam/pause (0.62s)

TestErrorSpam/unpause (0.6s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

TestErrorSpam/stop (3.24s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 stop: (3.072789625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-370000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-370000 stop
--- PASS: TestErrorSpam/stop (3.24s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16578-1150/.minikube/files/etc/test/nested/copy/1564/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.99s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-656000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-arm64 start -p functional-656000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.988175542s)
--- PASS: TestFunctional/serial/StartWithProxy (45.99s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.36s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-656000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-arm64 start -p functional-656000 --alsologtostderr -v=8: (35.358430792s)
functional_test.go:658: soft start took 35.359007292s for "functional-656000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.36s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-656000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 cache add registry.k8s.io/pause:3.1: (2.202898625s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 cache add registry.k8s.io/pause:3.3: (2.189293625s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 cache add registry.k8s.io/pause:latest: (1.65185975s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4078325165/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cache add minikube-local-cache-test:functional-656000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cache delete minikube-local-cache-test:functional-656000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-656000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (79.188667ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 cache reload: (1.047127042s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.30s)
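
Note: the cache_reload flow above can be replayed by hand against the same profile; a minimal sketch using the commands from this run:

	# remove the cached image inside the node, then confirm it is gone (crictl inspecti exits non-zero)
	out/minikube-darwin-arm64 -p functional-656000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-656000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push everything in the local cache back into the node and verify the image is present again
	out/minikube-darwin-arm64 -p functional-656000 cache reload
	out/minikube-darwin-arm64 -p functional-656000 ssh sudo crictl inspecti registry.k8s.io/pause:latest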

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 kubectl -- --context functional-656000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.62s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-656000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.62s)

TestFunctional/serial/ExtraConfig (36.83s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-656000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-darwin-arm64 start -p functional-656000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.825560959s)
functional_test.go:756: restart took 36.825677041s for "functional-656000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.83s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-656000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.62s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3467568282/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/parallel/ConfigCmd (0.21s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 config get cpus: exit status 14 (29.125083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 config get cpus: exit status 14 (30.206375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
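
Note: exit status 14 from "config get cpus" is the expected result when the key is unset; the test sets, reads, and unsets the value in between. A minimal sketch of the same round trip:

	out/minikube-darwin-arm64 -p functional-656000 config unset cpus
	out/minikube-darwin-arm64 -p functional-656000 config get cpus      # exit status 14: key not found
	out/minikube-darwin-arm64 -p functional-656000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-656000 config get cpus      # prints the stored value
	out/minikube-darwin-arm64 -p functional-656000 config unset cpus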

TestFunctional/parallel/DashboardCmd (7.91s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-656000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-656000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3000: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.91s)

TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-656000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-656000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.758958ms)

-- stdout --
	* [functional-656000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0610 09:52:34.690472    2988 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:52:34.690585    2988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:34.690588    2988 out.go:309] Setting ErrFile to fd 2...
	I0610 09:52:34.690590    2988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:34.690655    2988 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:52:34.691631    2988 out.go:303] Setting JSON to false
	I0610 09:52:34.706746    2988 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3125,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:52:34.706811    2988 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:52:34.711334    2988 out.go:177] * [functional-656000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 09:52:34.718280    2988 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:52:34.718345    2988 notify.go:220] Checking for updates...
	I0610 09:52:34.722331    2988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:52:34.725343    2988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:52:34.728298    2988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:52:34.731317    2988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:52:34.734287    2988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:52:34.737600    2988 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:52:34.737828    2988 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:52:34.742273    2988 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 09:52:34.749257    2988 start.go:297] selected driver: qemu2
	I0610 09:52:34.749261    2988 start.go:875] validating driver "qemu2" against &{Name:functional-656000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-656000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:52:34.749304    2988 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:52:34.755157    2988 out.go:177] 
	W0610 09:52:34.759280    2988 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 09:52:34.763252    2988 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-656000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
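
Note: the first dry run is expected to fail — 250MB is below minikube's 1800MB floor, hence RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 — while the second dry run validates the existing 4000MB profile. A minimal sketch of both invocations from this test:

	out/minikube-darwin-arm64 start -p functional-656000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2    # exit status 23
	out/minikube-darwin-arm64 start -p functional-656000 --dry-run --alsologtostderr -v=1 --driver=qemu2              # passes validation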

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-656000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-656000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.392ms)

-- stdout --
	* [functional-656000] minikube v1.30.1 sur Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0610 09:52:34.574669    2984 out.go:296] Setting OutFile to fd 1 ...
	I0610 09:52:34.574818    2984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:34.574821    2984 out.go:309] Setting ErrFile to fd 2...
	I0610 09:52:34.574823    2984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 09:52:34.574904    2984 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
	I0610 09:52:34.576268    2984 out.go:303] Setting JSON to false
	I0610 09:52:34.593689    2984 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3125,"bootTime":1686412829,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 09:52:34.593761    2984 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 09:52:34.598371    2984 out.go:177] * [functional-656000] minikube v1.30.1 sur Darwin 13.4 (arm64)
	I0610 09:52:34.605341    2984 notify.go:220] Checking for updates...
	I0610 09:52:34.609269    2984 out.go:177]   - MINIKUBE_LOCATION=16578
	I0610 09:52:34.613318    2984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	I0610 09:52:34.614625    2984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 09:52:34.617292    2984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 09:52:34.620291    2984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	I0610 09:52:34.623353    2984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 09:52:34.626628    2984 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 09:52:34.626896    2984 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 09:52:34.631314    2984 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0610 09:52:34.638279    2984 start.go:297] selected driver: qemu2
	I0610 09:52:34.638285    2984 start.go:875] validating driver "qemu2" against &{Name:functional-656000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-656000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 09:52:34.638351    2984 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 09:52:34.645269    2984 out.go:177] 
	W0610 09:52:34.649281    2984 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 09:52:34.653284    2984 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.29s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.29s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.18s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a084998a-f902-48ce-9ea9-ded63d5ef783] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.023541875s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-656000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-656000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-656000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-656000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8cb4e373-539a-479c-9712-a5e6333b726a] Pending
helpers_test.go:344: "sp-pod" [8cb4e373-539a-479c-9712-a5e6333b726a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8cb4e373-539a-479c-9712-a5e6333b726a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00892775s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-656000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-656000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-656000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b1293f7-cafb-4ee5-9233-7ece7b4ad9f9] Pending
helpers_test.go:344: "sp-pod" [4b1293f7-cafb-4ee5-9233-7ece7b4ad9f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b1293f7-cafb-4ee5-9233-7ece7b4ad9f9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.012863959s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-656000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.18s)
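
Note: the point of this test is that data written to the claim survives pod deletion. A minimal sketch of the persistence check, using the same manifests the test applies (testdata/storage-provisioner/pvc.yaml and pod.yaml in the minikube source tree):

	kubectl --context functional-656000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-656000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-656000 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod; the file must still be visible on the claim-backed mount
	kubectl --context functional-656000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-656000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-656000 exec sp-pod -- ls /tmp/mount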

TestFunctional/parallel/SSHCmd (0.15s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)

TestFunctional/parallel/CpCmd (0.31s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh -n functional-656000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 cp functional-656000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1719203093/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh -n functional-656000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.31s)

TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/1564/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/test/nested/copy/1564/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.46s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/1564.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/ssl/certs/1564.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/1564.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /usr/share/ca-certificates/1564.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/15642.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/ssl/certs/15642.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/15642.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /usr/share/ca-certificates/15642.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.46s)
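
Note: CertSync checks that certificates supplied on the host were copied into the VM both under their original file names and under OpenSSL hash names (the 1564/15642 prefixes appear to come from this run's test process ID). A minimal sketch of one pair of checks taken from the steps above:

	out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/ssl/certs/1564.pem"
	out/minikube-darwin-arm64 -p functional-656000 ssh "sudo cat /etc/ssl/certs/51391683.0"    # presumably the hash-named copy of the same certificate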

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-656000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh "sudo systemctl is-active crio": exit status 1 (118.918792ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.53s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-656000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-656000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-656000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-656000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2841: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-656000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-656000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9db45d0f-2da4-40c8-b792-848bb65fc193] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9db45d0f-2da4-40c8-b792-848bb65fc193] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.011734792s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.09s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-656000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.156.225 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-656000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-656000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-656000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-kwc7h" [9ab30c40-3e2d-4d1b-97d3-02670664ef59] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-kwc7h" [9ab30c40-3e2d-4d1b-97d3-02670664ef59] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.011006958s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.10s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 service list -o json
functional_test.go:1492: Took "292.875334ms" to run "out/minikube-darwin-arm64 -p functional-656000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.105.4:30684
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

                                                
                                    
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.105.4:30684
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.20s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1313: Took "121.649292ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1327: Took "34.600875ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

                                                
                                    
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1364: Took "127.15325ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1377: Took "32.66825ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

                                                
                                    
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port423916164/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686415946500997000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port423916164/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686415946500997000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port423916164/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686415946500997000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port423916164/001/test-1686415946500997000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (52.123959ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16578-1150/.minikube/machines/functional-656000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_bb303e1da6581176b9026bc6876d8b48e49704e8_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 10 16:52 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 10 16:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 10 16:52 test-1686415946500997000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh cat /mount-9p/test-1686415946500997000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-656000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a585885c-cda2-415c-a9b1-15dd7b9f75ba] Pending
helpers_test.go:344: "busybox-mount" [a585885c-cda2-415c-a9b1-15dd7b9f75ba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a585885c-cda2-415c-a9b1-15dd7b9f75ba] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a585885c-cda2-415c-a9b1-15dd7b9f75ba] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006993583s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-656000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port423916164/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.05s)

                                                
                                    
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3674542565/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (73.543667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3674542565/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh "sudo umount -f /mount-9p": exit status 1 (73.318334ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-656000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3674542565/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.82s)

                                                
                                    
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T" /mount1: exit status 1 (83.761709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-656000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-656000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3585230942/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.89s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-656000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-656000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-656000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-656000 image ls --format short --alsologtostderr:
I0610 09:52:52.520108    3139 out.go:296] Setting OutFile to fd 1 ...
I0610 09:52:52.520797    3139 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.520800    3139 out.go:309] Setting ErrFile to fd 2...
I0610 09:52:52.520803    3139 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.520873    3139 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
I0610 09:52:52.521274    3139 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.521334    3139 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.522162    3139 ssh_runner.go:195] Run: systemctl --version
I0610 09:52:52.522178    3139 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/functional-656000/id_rsa Username:docker}
I0610 09:52:52.562509    3139 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.11s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-656000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-656000 | 7db8912bbf1f9 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.27.2           | 72c9df6be7f1b | 115MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | 2ee705380c3c5 | 107MB  |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/library/nginx                     | latest            | c42efe0b54387 | 135MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-656000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | 5ee47dcca7543 | 41MB   |
| registry.k8s.io/kube-scheduler              | v1.27.2           | 305d7ed1dae28 | 56.2MB |
| registry.k8s.io/kube-proxy                  | v1.27.2           | 29921a0845422 | 66.5MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-656000 image ls --format table --alsologtostderr:
I0610 09:52:52.716583    3148 out.go:296] Setting OutFile to fd 1 ...
I0610 09:52:52.716719    3148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.716722    3148 out.go:309] Setting ErrFile to fd 2...
I0610 09:52:52.716724    3148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.716794    3148 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
I0610 09:52:52.717162    3148 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.717223    3148 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.717989    3148 ssh_runner.go:195] Run: systemctl --version
I0610 09:52:52.717999    3148 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/functional-656000/id_rsa Username:docker}
I0610 09:52:52.759035    3148 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-656000 image ls --format json --alsologtostderr:
[{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-656000"],"size":"32900000"},{"id":"7db8912bbf1f99aca0448421312d71ac532ef0a3bb333faf614522fb0dfc5017","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-656000"],"size":"30"},{"id":"5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"4100
0000"},{"id":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"115000000"},{"id":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"66500000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"135000000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[
],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"56200000"},{"id":"2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"107000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm
:1.8"],"size":"85000000"}]
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-656000 image ls --format json --alsologtostderr:
I0610 09:52:52.630551    3144 out.go:296] Setting OutFile to fd 1 ...
I0610 09:52:52.630735    3144 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.630739    3144 out.go:309] Setting ErrFile to fd 2...
I0610 09:52:52.630741    3144 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.630818    3144 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
I0610 09:52:52.631286    3144 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.631345    3144 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.632132    3144 ssh_runner.go:195] Run: systemctl --version
I0610 09:52:52.632143    3144 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/functional-656000/id_rsa Username:docker}
I0610 09:52:52.670676    3144 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-656000 ssh pgrep buildkitd: exit status 1 (77.496041ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image build -t localhost/my-image:functional-656000 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 image build -t localhost/my-image:functional-656000 testdata/build --alsologtostderr: (2.742707375s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-656000 image build -t localhost/my-image:functional-656000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in b45d5259dc7b
Removing intermediate container b45d5259dc7b
---> 1eff641ecc8a
Step 3/3 : ADD content.txt /
---> 7b038b256acd
Successfully built 7b038b256acd
Successfully tagged localhost/my-image:functional-656000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-656000 image build -t localhost/my-image:functional-656000 testdata/build --alsologtostderr:
I0610 09:52:52.638028    3145 out.go:296] Setting OutFile to fd 1 ...
I0610 09:52:52.638220    3145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.638224    3145 out.go:309] Setting ErrFile to fd 2...
I0610 09:52:52.638226    3145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 09:52:52.638309    3145 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16578-1150/.minikube/bin
I0610 09:52:52.638736    3145 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.639460    3145 config.go:182] Loaded profile config "functional-656000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 09:52:52.640219    3145 ssh_runner.go:195] Run: systemctl --version
I0610 09:52:52.640230    3145 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16578-1150/.minikube/machines/functional-656000/id_rsa Username:docker}
I0610 09:52:52.683300    3145 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2075903740.tar
I0610 09:52:52.683345    3145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 09:52:52.686430    3145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2075903740.tar
I0610 09:52:52.687796    3145 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2075903740.tar: stat -c "%s %y" /var/lib/minikube/build/build.2075903740.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2075903740.tar': No such file or directory
I0610 09:52:52.687819    3145 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2075903740.tar --> /var/lib/minikube/build/build.2075903740.tar (3072 bytes)
I0610 09:52:52.695196    3145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2075903740
I0610 09:52:52.698245    3145 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2075903740 -xf /var/lib/minikube/build/build.2075903740.tar
I0610 09:52:52.701737    3145 docker.go:336] Building image: /var/lib/minikube/build/build.2075903740
I0610 09:52:52.701785    3145 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-656000 /var/lib/minikube/build/build.2075903740
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0610 09:52:55.336561    3145 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-656000 /var/lib/minikube/build/build.2075903740: (2.6348035s)
I0610 09:52:55.336623    3145 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2075903740
I0610 09:52:55.339719    3145 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2075903740.tar
I0610 09:52:55.342240    3145 build_images.go:207] Built localhost/my-image:functional-656000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2075903740.tar
I0610 09:52:55.342253    3145 build_images.go:123] succeeded building to: functional-656000
I0610 09:52:55.342255    3145 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.90s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.492439875s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-656000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.53s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image load --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr
2023/06/10 09:52:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:353: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 image load --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr: (2.085498292s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.17s)

                                                
                                    
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)

                                                
                                    
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-656000 docker-env) && out/minikube-darwin-arm64 status -p functional-656000"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-656000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.43s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image load --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 image load --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr: (1.531707334s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.62s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.450040833s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-656000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image load --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 image load --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr: (1.847395834s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.44s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image save gcr.io/google-containers/addon-resizer:functional-656000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image rm gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

                                                
                                    
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-656000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p functional-656000 image save --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-darwin-arm64 -p functional-656000 image save --daemon gcr.io/google-containers/addon-resizer:functional-656000 --alsologtostderr: (1.501947667s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-656000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)

                                                
                                    
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-656000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                    
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-656000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-656000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-179000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-179000 --driver=qemu2 : (30.183691792s)
--- PASS: TestImageBuild/serial/Setup (30.18s)

                                                
                                    
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-179000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-179000: (2.223429708s)
--- PASS: TestImageBuild/serial/NormalBuild (2.22s)

                                                
                                    
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-179000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

                                                
                                    
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-179000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

                                                
                                    
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-659000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E0610 09:53:39.666319    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:39.673528    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:39.683628    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:39.703824    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:39.745183    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:39.827426    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:39.989539    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:40.310427    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:40.951430    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:42.231870    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:44.793546    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:53:49.915930    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:54:00.157955    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
E0610 09:54:20.638339    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-659000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m25.999011958s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (86.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons enable ingress --alsologtostderr -v=5
E0610 09:55:01.599916    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons enable ingress --alsologtostderr -v=5: (13.818241708s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.82s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.18s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-659000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.18s)

                                                
                                    
TestJSONOutput/start/Command (45.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-769000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0610 09:56:23.520911    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/addons-098000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-769000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (45.703147916s)
--- PASS: TestJSONOutput/start/Command (45.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.3s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-769000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.30s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-769000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-769000 --output=json --user=testUser
E0610 09:56:54.753201    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:54.759615    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:54.771804    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:54.793880    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:54.835971    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:54.916329    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:55.078439    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:55.400643    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:56.042857    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:57.325257    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
E0610 09:56:59.887653    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-769000 --output=json --user=testUser: (12.0820955s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-080000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-080000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.782459ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d6255a42-fe97-4b8b-9386-3b344978a6b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-080000] minikube v1.30.1 on Darwin 13.4 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d41e2d3-85e9-4451-827f-62f436de33e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16578"}}
	{"specversion":"1.0","id":"f084103f-c94a-4252-bcd0-a9aa67a8a0be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig"}}
	{"specversion":"1.0","id":"221d61e7-61a0-4302-9fb4-6a4b67ead397","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bf176fe3-4afa-4f40-9e0f-adcaba2b9634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"84b8b3fb-ad67-4a22-8161-eb44db11164c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube"}}
	{"specversion":"1.0","id":"07053238-7fde-4e1f-8b03-7f7f1d7b0fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5221f413-4fd4-4879-ad77-609f0b105afa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-080000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-080000
--- PASS: TestErrorJSONOutput (0.33s)
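The -- stdout -- block above shows the line-delimited, CloudEvents-style JSON that minikube emits with --output=json: one object per line with specversion, type, and a data payload (message, exitcode, and so on). As a minimal sketch (not part of this test suite, and only modelling the fields visible in that output), a consumer could decode the stream like this:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent models only the fields visible in the stdout block above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g.: out/minikube-darwin-arm64 start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// error events carry exitcode/message, as in the DRV_UNSUPPORTED_OS event above
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Println(ev.Data["message"])
	}
}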

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (18.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-419000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0610 09:57:35.734414    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/functional-656000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-arm64 start -p mount-start-1-419000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : (17.382419917s)
--- PASS: TestMountStart/serial/StartWithMountFirst (18.38s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.19s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-1-419000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-1-419000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.19s)
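For reference, the verification above amounts to running `minikube ssh -- mount` and looking for a 9p filesystem entry. A minimal sketch of the same check (not the actual mount_start_test.go helper; it reuses the mount-start-1-419000 profile from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: out/minikube-darwin-arm64 -p mount-start-1-419000 ssh -- mount | grep 9p
	out, err := exec.Command("out/minikube-darwin-arm64",
		"-p", "mount-start-1-419000", "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("9p mount missing")
	}
}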

                                                
                                    
TestMountStart/serial/StartWithMountSecond (18.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-2-422000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-arm64 start -p mount-start-2-422000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=qemu2 : (17.305635166s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.31s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.19s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-422000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-422000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.19s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.1s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 delete -p mount-start-1-419000 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.10s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (82.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-171000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0610 10:00:09.487146    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:09.493456    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:09.505509    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:09.527577    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:09.569642    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:09.651690    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:09.813762    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:10.135870    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:10.777133    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:12.059239    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:14.620701    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:19.742778    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:29.984873    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
E0610 10:00:50.465379    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-arm64 start -p multinode-171000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : (1m22.009367875s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.63s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-arm64 kubectl -p multinode-171000 -- rollout status deployment/busybox: (3.531534083s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-nc52j -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-ntgtl -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-nc52j -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-ntgtl -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-nc52j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-ntgtl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.63s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.54s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-nc52j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-nc52j -- sh -c "ping -c 1 192.168.105.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-ntgtl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-171000 -- exec busybox-67b7f59bb-ntgtl -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.54s)

                                                
                                    
TestMultiNode/serial/AddNode (35.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-171000 -v 3 --alsologtostderr
E0610 10:01:31.426909    1564 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16578-1150/.minikube/profiles/ingress-addon-legacy-659000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-arm64 node add -p multinode-171000 -v 3 --alsologtostderr: (35.6291255s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.82s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

                                                
                                    
TestMultiNode/serial/CopyFile (2.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp testdata/cp-test.txt multinode-171000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile2998997279/001/cp-test_multinode-171000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000:/home/docker/cp-test.txt multinode-171000-m02:/home/docker/cp-test_multinode-171000_multinode-171000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m02 "sudo cat /home/docker/cp-test_multinode-171000_multinode-171000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000:/home/docker/cp-test.txt multinode-171000-m03:/home/docker/cp-test_multinode-171000_multinode-171000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m03 "sudo cat /home/docker/cp-test_multinode-171000_multinode-171000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp testdata/cp-test.txt multinode-171000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile2998997279/001/cp-test_multinode-171000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000-m02:/home/docker/cp-test.txt multinode-171000:/home/docker/cp-test_multinode-171000-m02_multinode-171000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000 "sudo cat /home/docker/cp-test_multinode-171000-m02_multinode-171000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000-m02:/home/docker/cp-test.txt multinode-171000-m03:/home/docker/cp-test_multinode-171000-m02_multinode-171000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m03 "sudo cat /home/docker/cp-test_multinode-171000-m02_multinode-171000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp testdata/cp-test.txt multinode-171000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile2998997279/001/cp-test_multinode-171000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000-m03:/home/docker/cp-test.txt multinode-171000:/home/docker/cp-test_multinode-171000-m03_multinode-171000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000 "sudo cat /home/docker/cp-test_multinode-171000-m03_multinode-171000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 cp multinode-171000-m03:/home/docker/cp-test.txt multinode-171000-m02:/home/docker/cp-test_multinode-171000-m03_multinode-171000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-171000 ssh -n multinode-171000-m02 "sudo cat /home/docker/cp-test_multinode-171000-m03_multinode-171000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-009000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (93.772125ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-009000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16578
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16578-1150/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16578-1150/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.390791ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-009000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (0.07s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-009000
--- PASS: TestNoKubernetes/serial/Stop (0.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-009000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.365333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-009000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-737000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000 -n old-k8s-version-737000: exit status 7 (30.03925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-737000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
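The `status --format={{.Host}}` step above exits with status 7 when the profile is stopped, and the test treats that as acceptable ("may be ok"). A minimal sketch of handling that exit code (not the test helper itself; it reuses the old-k8s-version-737000 profile from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent of: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-737000
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "old-k8s-version-737000")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 is expected for a stopped cluster; stdout still reports "Stopped".
		fmt.Printf("status exited %d: %s\n", exitErr.ExitCode(), out)
		return
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
		return
	}
	fmt.Printf("host state: %s\n", out)
}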

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-133000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-133000 -n no-preload-133000: exit status 7 (29.112625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-133000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-315000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-315000 -n embed-certs-315000: exit status 7 (28.627625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-315000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-680000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-680000 -n default-k8s-diff-port-680000: exit status 7 (27.960458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-680000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-785000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-785000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-785000 -n newest-cni-785000: exit status 7 (29.654917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-785000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (21/258)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.41s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-472000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-472000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-472000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/hosts:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/resolv.conf:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-472000

>>> host: crictl pods:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: crictl containers:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> k8s: describe netcat deployment:
error: context "cilium-472000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-472000" does not exist

>>> k8s: netcat logs:
error: context "cilium-472000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-472000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-472000" does not exist

>>> k8s: coredns logs:
error: context "cilium-472000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-472000" does not exist

>>> k8s: api server logs:
error: context "cilium-472000" does not exist

>>> host: /etc/cni:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: ip a s:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: ip r s:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: iptables-save:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: iptables table nat:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-472000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-472000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-472000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-472000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-472000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-472000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-472000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-472000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-472000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-472000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-472000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: kubelet daemon config:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> k8s: kubelet logs:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-472000

>>> host: docker daemon status:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: docker daemon config:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: docker system info:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: cri-docker daemon status:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: cri-docker daemon config:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: cri-dockerd version:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: containerd daemon status:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: containerd daemon config:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: containerd config dump:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: crio daemon status:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: crio daemon config:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: /etc/crio:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

>>> host: crio config:
* Profile "cilium-472000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472000"

----------------------- debugLogs end: cilium-472000 [took: 2.164295333s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-472000
--- SKIP: TestNetworkPlugins/group/cilium (2.41s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-177000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-177000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
