Test Report: QEMU_macOS 16573

2f0304e5caeb910cf6b713a3408f4279364136e7:2023-05-24:29404

Failed tests (93/253)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.98
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.88
24 TestAddons/parallel/Registry 720.83
25 TestAddons/parallel/Ingress 0.79
26 TestAddons/parallel/InspektorGadget 480.89
27 TestAddons/parallel/MetricsServer 721.01
30 TestAddons/parallel/CSI 670.89
32 TestAddons/parallel/CloudSpanner 832.26
33 TestAddons/serial 0
34 TestAddons/StoppedEnableDisable 0
35 TestCertOptions 10.13
36 TestCertExpiration 195.12
37 TestDockerFlags 10.03
38 TestForceSystemdFlag 12.1
39 TestForceSystemdEnv 9.96
82 TestFunctional/parallel/ServiceCmdConnect 32.87
149 TestImageBuild/serial/BuildWithBuildArg 1.13
158 TestIngressAddonLegacy/serial/ValidateIngressAddons 55.87
193 TestMountStart/serial/StartWithMountFirst 10.31
196 TestMultiNode/serial/FreshStart2Nodes 9.81
197 TestMultiNode/serial/DeployApp2Nodes 102.31
198 TestMultiNode/serial/PingHostFrom2Pods 0.08
199 TestMultiNode/serial/AddNode 0.07
200 TestMultiNode/serial/ProfileList 0.11
201 TestMultiNode/serial/CopyFile 0.06
202 TestMultiNode/serial/StopNode 0.13
203 TestMultiNode/serial/StartAfterStop 0.1
204 TestMultiNode/serial/RestartKeepsNodes 5.36
205 TestMultiNode/serial/DeleteNode 0.1
206 TestMultiNode/serial/StopMultiNode 0.14
207 TestMultiNode/serial/RestartMultiNode 5.24
208 TestMultiNode/serial/ValidateNameConflict 19.78
212 TestPreload 9.88
214 TestScheduledStopUnix 10.05
215 TestSkaffold 12.6
218 TestRunningBinaryUpgrade 164.62
220 TestKubernetesUpgrade 15.21
233 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.42
234 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.14
235 TestStoppedBinaryUpgrade/Setup 136.96
237 TestPause/serial/Start 9.9
247 TestNoKubernetes/serial/StartWithK8s 9.94
248 TestNoKubernetes/serial/StartWithStopK8s 5.32
249 TestNoKubernetes/serial/Start 5.29
253 TestNoKubernetes/serial/StartNoArgs 5.32
255 TestNetworkPlugins/group/auto/Start 9.7
256 TestNetworkPlugins/group/calico/Start 9.73
257 TestNetworkPlugins/group/custom-flannel/Start 9.82
258 TestNetworkPlugins/group/false/Start 10.17
259 TestNetworkPlugins/group/kindnet/Start 9.67
260 TestNetworkPlugins/group/flannel/Start 9.75
261 TestNetworkPlugins/group/enable-default-cni/Start 9.6
262 TestNetworkPlugins/group/bridge/Start 9.71
263 TestNetworkPlugins/group/kubenet/Start 9.81
264 TestStoppedBinaryUpgrade/Upgrade 2.34
265 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
267 TestStartStop/group/old-k8s-version/serial/FirstStart 11.71
269 TestStartStop/group/no-preload/serial/FirstStart 9.91
270 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
271 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
274 TestStartStop/group/old-k8s-version/serial/SecondStart 7.04
275 TestStartStop/group/no-preload/serial/DeployApp 0.08
276 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
279 TestStartStop/group/no-preload/serial/SecondStart 5.19
280 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
282 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
283 TestStartStop/group/old-k8s-version/serial/Pause 0.1
284 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
285 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
286 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
287 TestStartStop/group/no-preload/serial/Pause 0.11
289 TestStartStop/group/embed-certs/serial/FirstStart 9.8
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.31
292 TestStartStop/group/embed-certs/serial/DeployApp 0.1
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/embed-certs/serial/SecondStart 6.99
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.08
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.19
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/embed-certs/serial/Pause 0.1
306 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
311 TestStartStop/group/newest-cni/serial/FirstStart 9.94
316 TestStartStop/group/newest-cni/serial/SecondStart 5.24
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (14.98s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-108000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-108000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.976524458s)

-- stdout --
	{"specversion":"1.0","id":"7950dd0e-3a0b-406e-ab0d-203058d294d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-108000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e90f293-c143-4b9e-9667-a6742dd22967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16573"}}
	{"specversion":"1.0","id":"0b23c870-22ba-4683-b7c7-e98ba485e196","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig"}}
	{"specversion":"1.0","id":"16c8e8cd-8601-4ca3-8572-1cb6e93a6b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"fb2c719a-71df-4c31-af5b-1c592d544642","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"872f2653-75e3-4878-886b-e51e326172ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube"}}
	{"specversion":"1.0","id":"661d21d7-559a-4199-8de4-3ce4cc161a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"90b87169-af33-48ad-9b86-ff90022cb6f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3721baaa-8afd-450b-bd78-0d0767828b85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c3ada96d-b136-4b5c-a934-46a53ead2c9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd5a74bf-0e81-4f88-ae90-9d098eca629b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-108000 in cluster download-only-108000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c933bdd6-2dba-45f0-a735-d7375a1e9164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"736c26ef-b4ed-4a3f-a814-4ad5d80aec23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378] Decompressors:map[bz2:0x1400049a838 gz:0x1400049a890 tar:0x1400049a840 tar.bz2:0x1400049a850 tar.gz:0x1400049a860 tar.xz:0x1400049a870 tar.zst:0x1400049a880 tbz2:0x1400049a850 tgz:0x1400049a860 txz:0x1400049a870 tzst:0x1400049a880 xz:0x1400049a898 zip:0x1400049a8a0 zst:0x1400049a8b0] Getters:map[file:0x14000ab97c0 http:0x14000a22aa0 https:0x14000a22af0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"3f56dc0b-7395-4645-9420-f60b13bd1e40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0524 11:35:43.247013    1456 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:35:43.247159    1456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:35:43.247162    1456 out.go:309] Setting ErrFile to fd 2...
	I0524 11:35:43.247165    1456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:35:43.247229    1456 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	W0524 11:35:43.247359    1456 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16573-1024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16573-1024/.minikube/config/config.json: no such file or directory
	I0524 11:35:43.248572    1456 out.go:303] Setting JSON to true
	I0524 11:35:43.265684    1456 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":314,"bootTime":1684953029,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:35:43.265738    1456 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:35:43.269580    1456 out.go:97] [download-only-108000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:35:43.273607    1456 out.go:169] MINIKUBE_LOCATION=16573
	I0524 11:35:43.269733    1456 notify.go:220] Checking for updates...
	W0524 11:35:43.269768    1456 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball: no such file or directory
	I0524 11:35:43.278486    1456 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:35:43.281606    1456 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:35:43.283036    1456 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:35:43.286482    1456 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	W0524 11:35:43.292553    1456 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 11:35:43.292745    1456 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:35:43.297497    1456 out.go:97] Using the qemu2 driver based on user configuration
	I0524 11:35:43.297517    1456 start.go:295] selected driver: qemu2
	I0524 11:35:43.297532    1456 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:35:43.297588    1456 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:35:43.301532    1456 out.go:169] Automatically selected the socket_vmnet network
	I0524 11:35:43.307076    1456 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0524 11:35:43.307213    1456 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 11:35:43.307234    1456 cni.go:84] Creating CNI manager for ""
	I0524 11:35:43.307257    1456 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 11:35:43.307261    1456 start_flags.go:319] config:
	{Name:download-only-108000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:35:43.307399    1456 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:35:43.311547    1456 out.go:97] Downloading VM boot image ...
	I0524 11:35:43.311585    1456 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso
	I0524 11:35:50.541475    1456 out.go:97] Starting control plane node download-only-108000 in cluster download-only-108000
	I0524 11:35:50.541501    1456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 11:35:50.594774    1456 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 11:35:50.594832    1456 cache.go:57] Caching tarball of preloaded images
	I0524 11:35:50.594990    1456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 11:35:50.599399    1456 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0524 11:35:50.599405    1456 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:50.675234    1456 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 11:35:57.182614    1456 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:57.182748    1456 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:57.826801    1456 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0524 11:35:57.826976    1456 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/download-only-108000/config.json ...
	I0524 11:35:57.827004    1456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/download-only-108000/config.json: {Name:mkb01c988bf51437b0ec4fd4bf88d2090d77f626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:35:57.827258    1456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 11:35:57.827437    1456 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0524 11:35:58.153748    1456 out.go:169] 
	W0524 11:35:58.158931    1456 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378] Decompressors:map[bz2:0x1400049a838 gz:0x1400049a890 tar:0x1400049a840 tar.bz2:0x1400049a850 tar.gz:0x1400049a860 tar.xz:0x1400049a870 tar.zst:0x1400049a880 tbz2:0x1400049a850 tgz:0x1400049a860 txz:0x1400049a870 tzst:0x1400049a880 xz:0x1400049a898 zip:0x1400049a8a0 zst:0x1400049a8b0] Getters:map[file:0x14000ab97c0 http:0x14000a22aa0 https:0x14000a22af0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0524 11:35:58.158961    1456 out_reason.go:110] 
	W0524 11:35:58.166752    1456 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 11:35:58.169797    1456 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-108000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (14.98s)
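
The checksum download itself returns HTTP 404. Kubernetes v1.16.0 shipped in 2019, before any Apple Silicon Macs existed, so upstream never published darwin/arm64 kubectl binaries (or their .sha1 files) for that release; the TestDownloadOnly/v1.16.0/kubectl failure below is the same missing file. A minimal Go sketch, assuming only network access (hypothetical helper, not part of the test suite), to confirm the 404:

	// check_kubectl_404.go (hypothetical helper): probe the URL the test tried to fetch.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // expect "404 Not Found": no darwin/arm64 artifacts exist for v1.16.0
	}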

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.74738475s)

-- stdout --
	* [offline-docker-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-088000 in cluster offline-docker-088000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:28:51.163174    3779 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:28:51.163318    3779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:28:51.163321    3779 out.go:309] Setting ErrFile to fd 2...
	I0524 12:28:51.163323    3779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:28:51.163402    3779 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:28:51.164435    3779 out.go:303] Setting JSON to false
	I0524 12:28:51.181138    3779 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3502,"bootTime":1684953029,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:28:51.181242    3779 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:28:51.185287    3779 out.go:177] * [offline-docker-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:28:51.192124    3779 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:28:51.192142    3779 notify.go:220] Checking for updates...
	I0524 12:28:51.195213    3779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:28:51.199134    3779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:28:51.202184    3779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:28:51.205208    3779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:28:51.208358    3779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:28:51.211434    3779 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:28:51.211463    3779 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:28:51.215153    3779 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:28:51.221054    3779 start.go:295] selected driver: qemu2
	I0524 12:28:51.221059    3779 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:28:51.221065    3779 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:28:51.222985    3779 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:28:51.226102    3779 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:28:51.229225    3779 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:28:51.229244    3779 cni.go:84] Creating CNI manager for ""
	I0524 12:28:51.229252    3779 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:28:51.229257    3779 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:28:51.229264    3779 start_flags.go:319] config:
	{Name:offline-docker-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-088000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:28:51.229350    3779 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:51.236175    3779 out.go:177] * Starting control plane node offline-docker-088000 in cluster offline-docker-088000
	I0524 12:28:51.240084    3779 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:28:51.240117    3779 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:28:51.240129    3779 cache.go:57] Caching tarball of preloaded images
	I0524 12:28:51.240195    3779 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:28:51.240202    3779 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:28:51.240259    3779 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/offline-docker-088000/config.json ...
	I0524 12:28:51.240275    3779 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/offline-docker-088000/config.json: {Name:mke0371ff1fb606e40b522204065b04509dd5bed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:28:51.240475    3779 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:28:51.240487    3779 start.go:364] acquiring machines lock for offline-docker-088000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:28:51.240509    3779 start.go:368] acquired machines lock for "offline-docker-088000" in 18.291µs
	I0524 12:28:51.240521    3779 start.go:93] Provisioning new machine with config: &{Name:offline-docker-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-088000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:28:51.240547    3779 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:28:51.249203    3779 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:28:51.263545    3779 start.go:159] libmachine.API.Create for "offline-docker-088000" (driver="qemu2")
	I0524 12:28:51.263578    3779 client.go:168] LocalClient.Create starting
	I0524 12:28:51.263644    3779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:28:51.263665    3779 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:51.263677    3779 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:51.263732    3779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:28:51.263746    3779 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:51.263753    3779 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:51.264071    3779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:28:51.391544    3779 main.go:141] libmachine: Creating SSH key...
	I0524 12:28:51.542605    3779 main.go:141] libmachine: Creating Disk image...
	I0524 12:28:51.542614    3779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:28:51.542818    3779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2
	I0524 12:28:51.558842    3779 main.go:141] libmachine: STDOUT: 
	I0524 12:28:51.558859    3779 main.go:141] libmachine: STDERR: 
	I0524 12:28:51.558941    3779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2 +20000M
	I0524 12:28:51.566722    3779 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:28:51.566741    3779 main.go:141] libmachine: STDERR: 
	I0524 12:28:51.566765    3779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2
	I0524 12:28:51.566775    3779 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:28:51.566821    3779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c1:ca:dc:62:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2
	I0524 12:28:51.568498    3779 main.go:141] libmachine: STDOUT: 
	I0524 12:28:51.568513    3779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:28:51.568532    3779 client.go:171] LocalClient.Create took 304.951792ms
	I0524 12:28:53.569858    3779 start.go:128] duration metric: createHost completed in 2.329324667s
	I0524 12:28:53.569877    3779 start.go:83] releasing machines lock for "offline-docker-088000", held for 2.329388208s
	W0524 12:28:53.569891    3779 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:28:53.582774    3779 out.go:177] * Deleting "offline-docker-088000" in qemu2 ...
	W0524 12:28:53.593125    3779 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:28:53.593133    3779 start.go:702] Will try again in 5 seconds ...
	I0524 12:28:58.595271    3779 start.go:364] acquiring machines lock for offline-docker-088000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:28:58.595730    3779 start.go:368] acquired machines lock for "offline-docker-088000" in 374.625µs
	I0524 12:28:58.595831    3779 start.go:93] Provisioning new machine with config: &{Name:offline-docker-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-088000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:28:58.596172    3779 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:28:58.603971    3779 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:28:58.650651    3779 start.go:159] libmachine.API.Create for "offline-docker-088000" (driver="qemu2")
	I0524 12:28:58.650711    3779 client.go:168] LocalClient.Create starting
	I0524 12:28:58.650963    3779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:28:58.651034    3779 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:58.651055    3779 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:58.651145    3779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:28:58.651175    3779 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:58.651188    3779 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:58.651713    3779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:28:58.775971    3779 main.go:141] libmachine: Creating SSH key...
	I0524 12:28:58.826245    3779 main.go:141] libmachine: Creating Disk image...
	I0524 12:28:58.826251    3779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:28:58.826409    3779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2
	I0524 12:28:58.834884    3779 main.go:141] libmachine: STDOUT: 
	I0524 12:28:58.834899    3779 main.go:141] libmachine: STDERR: 
	I0524 12:28:58.834959    3779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2 +20000M
	I0524 12:28:58.842068    3779 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:28:58.842082    3779 main.go:141] libmachine: STDERR: 
	I0524 12:28:58.842097    3779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2
	I0524 12:28:58.842108    3779 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:28:58.842151    3779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:3f:7b:cc:cd:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/offline-docker-088000/disk.qcow2
	I0524 12:28:58.843647    3779 main.go:141] libmachine: STDOUT: 
	I0524 12:28:58.843660    3779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:28:58.843673    3779 client.go:171] LocalClient.Create took 192.956667ms
	I0524 12:29:00.845723    3779 start.go:128] duration metric: createHost completed in 2.249528333s
	I0524 12:29:00.845755    3779 start.go:83] releasing machines lock for "offline-docker-088000", held for 2.250031s
	W0524 12:29:00.845910    3779 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:00.856304    3779 out.go:177] 
	W0524 12:29:00.860420    3779 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:29:00.860437    3779 out.go:239] * 
	* 
	W0524 12:29:00.860909    3779 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:29:00.871196    3779 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-088000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-05-24 12:29:00.883283 -0700 PDT m=+3197.752424709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-088000 -n offline-docker-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-088000 -n offline-docker-088000: exit status 7 (32.093125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-088000
--- FAIL: TestOffline (9.88s)
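
Both VM creation attempts die at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), which points at host networking setup on the agent rather than minikube itself; the same error recurs in most of the qemu2 start failures in this report. A minimal Go probe, assuming the socket path from the log (hypothetical helper, not part of the suite):

	// probe_socket_vmnet.go (hypothetical helper): check whether the daemon accepts connections.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// "connection refused" here matches the test log: the daemon is not serving.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}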

TestAddons/parallel/Registry (720.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001565583s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
addons_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000: exit status 7 (55.567583ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0524 11:54:50.650364    1802 status.go:249] status error: host: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused

** /stderr **
addons_test.go:308: status error: exit status 7 (may be ok)
addons_test.go:308: "addons-514000" apiserver is not running, skipping kubectl commands (state="Nonexistent")
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
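
The registry pods could never stabilize because the cluster's apiserver never came up (status "Nonexistent"; the VM monitor socket refuses connections), so the 6m0s wait was bound to time out. For illustration, a hedged sketch of the check the test performs, shelling out to kubectl with the label selector from the log; it assumes kubectl on PATH and a kubeconfig pointing at addons-514000:

	// list_registry_pods.go (hypothetical helper): list the pods the test waits on.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", "kube-system", "-l", "actual-registry=true").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubectl failed:", err) // expected here: the apiserver was never reachable
		}
	}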
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | --download-only -p             | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT |                     |
	|         | binary-mirror-689000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49309         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-689000        | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | -p addons-514000               | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:42 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:54 PDT |                     |
	|         | addons-514000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:36:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:36:07.002339    1534 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:36:07.002453    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002456    1534 out.go:309] Setting ErrFile to fd 2...
	I0524 11:36:07.002459    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002536    1534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 11:36:07.003586    1534 out.go:303] Setting JSON to false
	I0524 11:36:07.018861    1534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":338,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:36:07.018925    1534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:36:07.027769    1534 out.go:177] * [addons-514000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:36:07.031820    1534 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 11:36:07.031893    1534 notify.go:220] Checking for updates...
	I0524 11:36:07.038648    1534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:07.041871    1534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:36:07.045796    1534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:36:07.047102    1534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 11:36:07.049751    1534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 11:36:07.052962    1534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:36:07.056656    1534 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 11:36:07.063768    1534 start.go:295] selected driver: qemu2
	I0524 11:36:07.063774    1534 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:36:07.063780    1534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 11:36:07.066216    1534 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:36:07.068844    1534 out.go:177] * Automatically selected the socket_vmnet network
	I0524 11:36:07.072801    1534 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 11:36:07.072817    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:07.072825    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:07.072829    1534 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 11:36:07.072834    1534 start_flags.go:319] config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
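
The bridge recommendation in the cni.go lines above follows a version rule: with the "docker" runtime on Kubernetes v1.24+ (where the built-in dockershim networking is gone), minikube falls back to a bridge CNI. A minimal Go sketch of that rule, assuming only the runtime and minor version matter (the real cni.go weighs more cases, such as an explicitly requested CNI):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorVersion extracts the minor component from a "v1.MINOR.PATCH" tag.
    // Sketch assumption: the major version is always 1, as in these logs.
    func minorVersion(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) < 2 {
            return 0
        }
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    // chooseCNI reduces the decision the log reports: "docker" runtime on
    // Kubernetes v1.24+ gets a bridge CNI; otherwise no recommendation.
    func chooseCNI(runtime, k8sVersion string) string {
        if runtime == "docker" && minorVersion(k8sVersion) >= 24 {
            return "bridge"
        }
        return ""
    }

    func main() {
        fmt.Println(chooseCNI("docker", "v1.27.2")) // prints "bridge"
    }
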
	I0524 11:36:07.072903    1534 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:36:07.080756    1534 out.go:177] * Starting control plane node addons-514000 in cluster addons-514000
	I0524 11:36:07.084763    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:07.084787    1534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:36:07.084798    1534 cache.go:57] Caching tarball of preloaded images
	I0524 11:36:07.084855    1534 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 11:36:07.084860    1534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 11:36:07.085026    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:07.085039    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json: {Name:mk030e94b16168c63405a9b01e247098a953bb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:07.085215    1534 cache.go:195] Successfully downloaded all kic artifacts
	I0524 11:36:07.085252    1534 start.go:364] acquiring machines lock for addons-514000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 11:36:07.085315    1534 start.go:368] acquired machines lock for "addons-514000" in 57.708µs
	I0524 11:36:07.085327    1534 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:07.085355    1534 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 11:36:07.093778    1534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 11:36:07.463575    1534 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0524 11:36:07.463635    1534 client.go:168] LocalClient.Create starting
	I0524 11:36:07.463808    1534 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 11:36:07.521208    1534 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 11:36:07.678481    1534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 11:36:08.060894    1534 main.go:141] libmachine: Creating SSH key...
	I0524 11:36:08.147520    1534 main.go:141] libmachine: Creating Disk image...
	I0524 11:36:08.147526    1534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 11:36:08.147754    1534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.231403    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.231426    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.231485    1534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0524 11:36:08.238737    1534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 11:36:08.238750    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.238766    1534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
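
The disk image above is produced by two qemu-img invocations: a raw-to-qcow2 convert of the bootstrap image followed by a +20000M resize. A short Go sketch of shelling out to the same pair of commands (an illustrative wrapper only; paths and error handling are simplified relative to libmachine, and the file names in main are placeholders):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk mirrors the qemu-img convert + resize pair in the log:
    // convert the raw bootstrap image to qcow2, then grow it by sizeMB.
    func createDisk(raw, qcow2 string, sizeMB int) error {
        if out, err := exec.Command("qemu-img", "convert",
            "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            return fmt.Errorf("convert: %v: %s", err, out)
        }
        if out, err := exec.Command("qemu-img", "resize",
            qcow2, fmt.Sprintf("+%dM", sizeMB)).CombinedOutput(); err != nil {
            return fmt.Errorf("resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }
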
	I0524 11:36:08.238773    1534 main.go:141] libmachine: Starting QEMU VM...
	I0524 11:36:08.238817    1534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:73:48:f5:f9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.309201    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.309237    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.309242    1534 main.go:141] libmachine: Attempt 0
	I0524 11:36:08.309258    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:10.311441    1534 main.go:141] libmachine: Attempt 1
	I0524 11:36:10.311529    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:12.313222    1534 main.go:141] libmachine: Attempt 2
	I0524 11:36:12.313245    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:14.315294    1534 main.go:141] libmachine: Attempt 3
	I0524 11:36:14.315307    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:16.317343    1534 main.go:141] libmachine: Attempt 4
	I0524 11:36:16.317356    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:18.319398    1534 main.go:141] libmachine: Attempt 5
	I0524 11:36:18.319426    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321607    1534 main.go:141] libmachine: Attempt 6
	I0524 11:36:20.321690    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321979    1534 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0524 11:36:20.322073    1534 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 11:36:20.322118    1534 main.go:141] libmachine: Found match: a:73:48:f5:f9:b3
	I0524 11:36:20.322159    1534 main.go:141] libmachine: IP: 192.168.105.2
	I0524 11:36:20.322182    1534 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
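
The attempt loop above polls macOS's /var/db/dhcpd_leases until the VM's MAC appears; note it searches for a:73:48:f5:f9:b3, the per-octet leading-zero-stripped form of the 0a:73:48:f5:f9:b3 passed to QEMU, because that is how dhcpd records hardware addresses. An illustrative Go sketch of that lookup, assuming the brace-delimited name=/ip_address=/hw_address= entry format implied by the "dhcp entry" line (not minikube's actual parser):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findLeaseIP scans a dhcpd_leases file for an entry whose hw_address
    // matches mac and returns its ip_address. Assumed format: one field per
    // line inside { ... } blocks, e.g. "hw_address=1,a:73:48:f5:f9:b3".
    func findLeaseIP(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip, hw string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // new lease block: reset per-entry state
                ip, hw = "", ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // drop the "1," hardware-type prefix before the MAC
                hw = line[strings.IndexByte(line, ',')+1:]
            case line == "}": // block complete: check for a match
                if hw == mac && ip != "" {
                    return ip, nil
                }
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := findLeaseIP("/var/db/dhcpd_leases", "a:73:48:f5:f9:b3")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(ip) // e.g. 192.168.105.2
    }
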
	I0524 11:36:22.345943    1534 machine.go:88] provisioning docker machine ...
	I0524 11:36:22.346010    1534 buildroot.go:166] provisioning hostname "addons-514000"
	I0524 11:36:22.346753    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.347771    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.347789    1534 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0524 11:36:22.440700    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0524 11:36:22.440862    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.441350    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.441366    1534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 11:36:22.513129    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 11:36:22.513148    1534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 11:36:22.513166    1534 buildroot.go:174] setting up certificates
	I0524 11:36:22.513196    1534 provision.go:83] configureAuth start
	I0524 11:36:22.513202    1534 provision.go:138] copyHostCerts
	I0524 11:36:22.513384    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 11:36:22.513907    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 11:36:22.514185    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 11:36:22.514351    1534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-514000]
	I0524 11:36:22.615592    1534 provision.go:172] copyRemoteCerts
	I0524 11:36:22.615660    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 11:36:22.615678    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:22.647614    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 11:36:22.654906    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 11:36:22.661956    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 11:36:22.668901    1534 provision.go:86] duration metric: configureAuth took 155.700959ms
	I0524 11:36:22.668909    1534 buildroot.go:189] setting minikube options for container-runtime
	I0524 11:36:22.669263    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:22.669315    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.669538    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.669543    1534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 11:36:22.728343    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 11:36:22.728351    1534 buildroot.go:70] root file system type: tmpfs
	I0524 11:36:22.728414    1534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 11:36:22.728455    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.728711    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.728749    1534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 11:36:22.797892    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 11:36:22.797940    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.798220    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.798231    1534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 11:36:23.149053    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 11:36:23.149067    1534 machine.go:91] provisioned docker machine in 803.097167ms
	I0524 11:36:23.149073    1534 client.go:171] LocalClient.Create took 15.685539208s
	I0524 11:36:23.149079    1534 start.go:167] duration metric: libmachine.API.Create for "addons-514000" took 15.685619292s
	I0524 11:36:23.149084    1534 start.go:300] post-start starting for "addons-514000" (driver="qemu2")
	I0524 11:36:23.149087    1534 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 11:36:23.149151    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 11:36:23.149161    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.182740    1534 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 11:36:23.184182    1534 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 11:36:23.184191    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 11:36:23.184263    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 11:36:23.184291    1534 start.go:303] post-start completed in 35.204125ms
	I0524 11:36:23.184667    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:23.184838    1534 start.go:128] duration metric: createHost completed in 16.099587584s
	I0524 11:36:23.184860    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:23.185079    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:23.185084    1534 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 11:36:23.240206    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684953383.421013085
	
	I0524 11:36:23.240212    1534 fix.go:207] guest clock: 1684953383.421013085
	I0524 11:36:23.240216    1534 fix.go:220] Guest: 2023-05-24 11:36:23.421013085 -0700 PDT Remote: 2023-05-24 11:36:23.184841 -0700 PDT m=+16.200821626 (delta=236.172085ms)
	I0524 11:36:23.240228    1534 fix.go:191] guest clock delta is within tolerance: 236.172085ms
	I0524 11:36:23.240231    1534 start.go:83] releasing machines lock for "addons-514000", held for 16.155020041s
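
The fix.go lines above run date on the guest over SSH and accept the start when the guest/host clock delta (236ms here) is inside a tolerance. A minimal sketch of that comparison; the 2s tolerance below is an assumed placeholder, not the value minikube actually uses:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is within tolerance of
    // the host clock. guestUnix is the parsed output of `date +%s.%N` run
    // on the guest, as in the log above.
    func clockDeltaOK(guestUnix float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
        delta := guest.Sub(host)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Host timestamp taken from the log line above (PDT = UTC-7).
        host := time.Date(2023, 5, 24, 11, 36, 23, 184841000, time.FixedZone("PDT", -7*3600))
        delta, ok := clockDeltaOK(1684953383.421013085, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~236ms, true
    }
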
	I0524 11:36:23.240534    1534 ssh_runner.go:195] Run: cat /version.json
	I0524 11:36:23.240542    1534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 11:36:23.240552    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.240589    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.271294    1534 ssh_runner.go:195] Run: systemctl --version
	I0524 11:36:23.356274    1534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 11:36:23.358206    1534 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 11:36:23.358253    1534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 11:36:23.363251    1534 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 11:36:23.363272    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:23.363358    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:23.374219    1534 docker.go:633] Got preloaded images: 
	I0524 11:36:23.374227    1534 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 11:36:23.374272    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:23.377135    1534 ssh_runner.go:195] Run: which lz4
	I0524 11:36:23.378475    1534 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 11:36:23.379822    1534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 11:36:23.379833    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 11:36:24.715030    1534 docker.go:597] Took 1.336609 seconds to copy over tarball
	I0524 11:36:24.715105    1534 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 11:36:25.802869    1534 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087750334s)
	I0524 11:36:25.802885    1534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0524 11:36:25.818539    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:25.821398    1534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 11:36:25.826757    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:25.912573    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:27.259007    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346426625s)
	I0524 11:36:27.259050    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.259161    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.264502    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 11:36:27.267902    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 11:36:27.271357    1534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.271387    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 11:36:27.274823    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.278019    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 11:36:27.280856    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.283904    1534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 11:36:27.287473    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 11:36:27.291108    1534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 11:36:27.294288    1534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 11:36:27.297250    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.376117    1534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 11:36:27.384917    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.384994    1534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 11:36:27.390435    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.395426    1534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 11:36:27.402483    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.406870    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.411215    1534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 11:36:27.451530    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.456795    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.461922    1534 ssh_runner.go:195] Run: which cri-dockerd
	I0524 11:36:27.463049    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 11:36:27.465876    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 11:36:27.470660    1534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 11:36:27.538638    1534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 11:36:27.616092    1534 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.616109    1534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 11:36:27.621459    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.708405    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:28.851963    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.143548708s)
	I0524 11:36:28.852015    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:28.939002    1534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 11:36:29.020013    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:29.108812    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.187424    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 11:36:29.194801    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.274472    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 11:36:29.298400    1534 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 11:36:29.298499    1534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 11:36:29.300633    1534 start.go:549] Will wait 60s for crictl version
	I0524 11:36:29.300681    1534 ssh_runner.go:195] Run: which crictl
	I0524 11:36:29.302069    1534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 11:36:29.320125    1534 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
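
The two 60-second waits above (first for the /var/run/cri-dockerd.sock socket path, then for crictl version) are the same poll-until-deadline idiom. A small illustrative Go version using a local stat-based probe; minikube runs the equivalent stat over SSH through its ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the timeout elapses, mirroring
    // "Will wait 60s for socket path /var/run/cri-dockerd.sock".
    func waitForPath(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, 500*time.Millisecond); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket ready")
    }
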
	I0524 11:36:29.320196    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.329425    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.346012    1534 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 11:36:29.346159    1534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 11:36:29.347609    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.351578    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:29.351619    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.359168    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.359177    1534 docker.go:563] Images already preloaded, skipping extraction
	I0524 11:36:29.359234    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.366578    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.366587    1534 cache_images.go:84] Images are preloaded, skipping loading
	I0524 11:36:29.366634    1534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 11:36:29.376722    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:29.376734    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:29.376743    1534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 11:36:29.376755    1534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 11:36:29.376831    1534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 11:36:29.376873    1534 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 11:36:29.376934    1534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 11:36:29.379950    1534 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 11:36:29.379980    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 11:36:29.383262    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 11:36:29.388298    1534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 11:36:29.393370    1534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0524 11:36:29.398040    1534 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0524 11:36:29.399441    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.403560    1534 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0524 11:36:29.403576    1534 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.403733    1534 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 11:36:29.494908    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt ...
	I0524 11:36:29.494916    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt: {Name:mkde13471093958a457d9307a0c213d7ba461177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495144    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key ...
	I0524 11:36:29.495147    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key: {Name:mk5b2a6f100829fa25412e4c96a6b4d9b186c9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495264    1534 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 11:36:29.601357    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt ...
	I0524 11:36:29.601364    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt: {Name:mkc3f94501092c9c51cfa6d329a0a2c4cec184ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601593    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key ...
	I0524 11:36:29.601596    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key: {Name:mk7acf18000a82a656fee32bbd454a3c129dabde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601733    1534 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key
	I0524 11:36:29.601741    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0524 11:36:29.653842    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt ...
	I0524 11:36:29.653845    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: {Name:mk3856cd37d1f07be2cc9902b19f9498b880112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654036    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key ...
	I0524 11:36:29.654040    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key: {Name:mkbc8808085e1496dcb2b3e03156e443b7b7994b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654176    1534 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969
	I0524 11:36:29.654188    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 11:36:29.724674    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 ...
	I0524 11:36:29.724678    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969: {Name:mk424188d0f28cb0aa520452bb8ec4583a153ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724815    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 ...
	I0524 11:36:29.724818    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969: {Name:mk98c3231c62717b32e2418cabd759d6ad5645ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724926    1534 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt
	I0524 11:36:29.725147    1534 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key
	I0524 11:36:29.725241    1534 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key
	I0524 11:36:29.725256    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0524 11:36:29.842949    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0524 11:36:29.842953    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mk581c30062675e68aafc25cb79bfc8a62fd3e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843105    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key ...
	I0524 11:36:29.843110    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key: {Name:mk019f6bac347a368012a36cea939860ce210025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843389    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 11:36:29.843593    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 11:36:29.843619    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 11:36:29.843756    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 11:36:29.844302    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 11:36:29.851879    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 11:36:29.859249    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 11:36:29.866847    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 11:36:29.873646    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 11:36:29.880415    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 11:36:29.887466    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 11:36:29.894575    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 11:36:29.901581    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 11:36:29.908027    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
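
The certs.go/crypto.go sequence above generates a minikubeCA root and a proxyClientCA, signs the client, apiserver (with the SANs from the san=[...] list), and aggregator certificates, then scp's everything into /var/lib/minikube/certs. A compact sketch of the CA-generation step using Go's standard crypto/x509; this is a minimal illustration of the artifact shape, not minikube's actual crypto.go (which also handles serials, file locking, and permissions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "time"
    )

    // newCA builds a self-signed CA certificate and key, the same kind of
    // artifact the log writes to .minikube/ca.crt and .minikube/ca.key.
    func newCA(cn string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1), // sketch: fixed serial
            Subject:               pkix.Name{CommonName: cn},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: template and parent are the same certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        cert, _, err := newCA("minikubeCA")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(cert))
    }
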
	I0524 11:36:29.914140    1534 ssh_runner.go:195] Run: openssl version
	I0524 11:36:29.916182    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 11:36:29.919659    1534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921372    1534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921394    1534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.923349    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 11:36:29.926902    1534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 11:36:29.928503    1534 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 11:36:29.928540    1534 kubeadm.go:404] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:29.928599    1534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 11:36:29.935998    1534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 11:36:29.939589    1534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 11:36:29.942818    1534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 11:36:29.945835    1534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 11:36:29.945853    1534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 11:36:29.967889    1534 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 11:36:29.967941    1534 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 11:36:30.020294    1534 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 11:36:30.020350    1534 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 11:36:30.020400    1534 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0524 11:36:30.076237    1534 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 11:36:30.084415    1534 out.go:204]   - Generating certificates and keys ...
	I0524 11:36:30.084460    1534 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 11:36:30.084494    1534 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 11:36:30.272940    1534 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 11:36:30.453046    1534 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 11:36:30.580586    1534 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 11:36:30.639773    1534 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 11:36:30.738497    1534 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 11:36:30.738567    1534 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.858811    1534 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 11:36:30.858875    1534 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.935967    1534 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 11:36:30.967281    1534 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 11:36:31.073416    1534 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 11:36:31.073445    1534 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 11:36:31.335469    1534 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 11:36:31.530915    1534 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 11:36:31.573436    1534 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 11:36:31.637219    1534 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 11:36:31.645102    1534 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 11:36:31.645531    1534 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 11:36:31.645571    1534 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 11:36:31.737201    1534 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 11:36:31.741345    1534 out.go:204]   - Booting up control plane ...
	I0524 11:36:31.741390    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 11:36:31.741439    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 11:36:31.741469    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 11:36:31.741512    1534 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 11:36:31.741595    1534 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 11:36:35.739695    1534 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002246 seconds
	I0524 11:36:35.739796    1534 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 11:36:35.750536    1534 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 11:36:36.270805    1534 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 11:36:36.271028    1534 kubeadm.go:322] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 11:36:36.776691    1534 kubeadm.go:322] [bootstrap-token] Using token: zlw52u.ca0agirmjwjpmd4f
	I0524 11:36:36.783931    1534 out.go:204]   - Configuring RBAC rules ...
	I0524 11:36:36.784005    1534 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 11:36:36.785227    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 11:36:36.791945    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 11:36:36.793322    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0524 11:36:36.794557    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 11:36:36.795891    1534 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 11:36:36.802617    1534 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 11:36:36.956552    1534 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 11:36:37.187637    1534 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 11:36:37.187937    1534 kubeadm.go:322] 
	I0524 11:36:37.187967    1534 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 11:36:37.187973    1534 kubeadm.go:322] 
	I0524 11:36:37.188044    1534 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 11:36:37.188053    1534 kubeadm.go:322] 
	I0524 11:36:37.188069    1534 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 11:36:37.188099    1534 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 11:36:37.188128    1534 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 11:36:37.188133    1534 kubeadm.go:322] 
	I0524 11:36:37.188155    1534 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 11:36:37.188158    1534 kubeadm.go:322] 
	I0524 11:36:37.188189    1534 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 11:36:37.188193    1534 kubeadm.go:322] 
	I0524 11:36:37.188219    1534 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 11:36:37.188277    1534 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 11:36:37.188314    1534 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 11:36:37.188322    1534 kubeadm.go:322] 
	I0524 11:36:37.188361    1534 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 11:36:37.188399    1534 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 11:36:37.188411    1534 kubeadm.go:322] 
	I0524 11:36:37.188464    1534 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188516    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 11:36:37.188534    1534 kubeadm.go:322] 	--control-plane 
	I0524 11:36:37.188538    1534 kubeadm.go:322] 
	I0524 11:36:37.188580    1534 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 11:36:37.188584    1534 kubeadm.go:322] 
	I0524 11:36:37.188629    1534 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188681    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 11:36:37.188736    1534 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 11:36:37.188819    1534 kubeadm.go:322] W0524 18:36:30.200947    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188904    1534 kubeadm.go:322] W0524 18:36:31.916526    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
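The --discovery-token-ca-cert-hash in the join commands pins joining nodes to this cluster's CA public key. It can be recomputed from the CA certificate with the standard openssl pipeline; the cert path below is assumed from the certificateDir reported earlier in this run, and the pipeline assumes an RSA CA key (kubeadm's default):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex        # should print the 31e7298e... hash shown in the join commands above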
	I0524 11:36:37.188909    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:37.188916    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:37.195686    1534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 11:36:37.199715    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 11:36:37.203087    1534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
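The 457-byte conflist copied above is the bridge CNI configuration; its exact contents are not captured in this log. A representative bridge+portmap conflist of the kind the CNI plugins consume is sketched below (all field values here are assumptions for illustration, not the literal file minikube wrote):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF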
	I0524 11:36:37.208259    1534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 11:36:37.208303    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.208333    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=addons-514000 minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.258566    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.271047    1534 ops.go:34] apiserver oom_adj: -16
	I0524 11:36:37.796169    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.296162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.796257    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.295049    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.796244    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.796162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.296458    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.796323    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.296423    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.796432    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.296246    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.796149    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.296189    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.796183    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.296206    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.796370    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.296192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.296219    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.796135    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.296201    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.796192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.296070    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.332878    1534 kubeadm.go:1076] duration metric: took 13.124695208s to wait for elevateKubeSystemPrivileges.
	I0524 11:36:50.332892    1534 kubeadm.go:406] StartCluster complete in 20.404490625s
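The run of identical `kubectl get sa default` calls above is a poll-until-ready loop: the "default" ServiceAccount only exists once the controller-manager's service-account controller has settled, so minikube retries on a roughly 500ms cadence until the get succeeds. An equivalent loop in shell, with the binary and kubeconfig paths taken from this run (the interval is inferred from the timestamps above):

	# Poll until the default ServiceAccount exists, i.e. the control plane is ready for RBAC work
	until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done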
	I0524 11:36:50.332916    1534 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333079    1534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:50.333301    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333499    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 11:36:50.333541    1534 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0524 11:36:50.333603    1534 addons.go:66] Setting ingress=true in profile "addons-514000"
	I0524 11:36:50.333609    1534 addons.go:66] Setting registry=true in profile "addons-514000"
	I0524 11:36:50.333611    1534 addons.go:228] Setting addon ingress=true in "addons-514000"
	I0524 11:36:50.333614    1534 addons.go:228] Setting addon registry=true in "addons-514000"
	I0524 11:36:50.333650    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333646    1534 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0524 11:36:50.333656    1534 addons.go:66] Setting storage-provisioner=true in profile "addons-514000"
	I0524 11:36:50.333660    1534 addons.go:228] Setting addon storage-provisioner=true in "addons-514000"
	I0524 11:36:50.333671    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333804    1534 addons.go:66] Setting metrics-server=true in profile "addons-514000"
	I0524 11:36:50.333879    1534 addons.go:228] Setting addon metrics-server=true in "addons-514000"
	I0524 11:36:50.333906    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333926    1534 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.333947    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:50.333682    1534 addons.go:66] Setting ingress-dns=true in profile "addons-514000"
	I0524 11:36:50.333976    1534 addons.go:228] Setting addon ingress-dns=true in "addons-514000"
	I0524 11:36:50.333995    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334035    1534 addons.go:66] Setting gcp-auth=true in profile "addons-514000"
	I0524 11:36:50.333605    1534 addons.go:66] Setting volumesnapshots=true in profile "addons-514000"
	I0524 11:36:50.334092    1534 addons.go:228] Setting addon volumesnapshots=true in "addons-514000"
	I0524 11:36:50.334116    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334159    1534 addons.go:66] Setting default-storageclass=true in profile "addons-514000"
	I0524 11:36:50.334172    1534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0524 11:36:50.333653    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334095    1534 mustload.go:65] Loading cluster: addons-514000
	I0524 11:36:50.334706    1534 addons.go:66] Setting inspektor-gadget=true in profile "addons-514000"
	I0524 11:36:50.334713    1534 addons.go:228] Setting addon inspektor-gadget=true in "addons-514000"
	I0524 11:36:50.333694    1534 addons.go:66] Setting cloud-spanner=true in profile "addons-514000"
	I0524 11:36:50.334861    1534 addons.go:228] Setting addon cloud-spanner=true in "addons-514000"
	I0524 11:36:50.334877    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334897    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334942    1534 host.go:66] Checking if "addons-514000" exists ...
	W0524 11:36:50.335292    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335303    1534 addons.go:274] "addons-514000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335306    1534 addons.go:464] Verifying addon metrics-server=true in "addons-514000"
	W0524 11:36:50.335329    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335333    1534 addons.go:274] "addons-514000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335353    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335359    1534 addons.go:274] "addons-514000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335362    1534 addons.go:464] Verifying addon registry=true in "addons-514000"
	W0524 11:36:50.335391    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.339535    1534 out.go:177] * Verifying registry addon...
	W0524 11:36:50.335411    1534 addons.go:274] "addons-514000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335412    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335520    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335588    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.335599    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0524 11:36:50.335650    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.349556    1534 addons.go:274] "addons-514000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0524 11:36:50.349673    1534 addons.go:274] "addons-514000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0524 11:36:50.349673    1534 addons.go:464] Verifying addon ingress=true in "addons-514000"
	W0524 11:36:50.349688    1534 addons_storage_classes.go:55] "addons-514000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0524 11:36:50.349678    1534 addons.go:274] "addons-514000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0524 11:36:50.350008    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0524 11:36:50.350257    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.353441    1534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 11:36:50.357663    1534 addons.go:228] Setting addon default-storageclass=true in "addons-514000"
	I0524 11:36:50.360618    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.360641    1534 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.360646    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 11:36:50.360653    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.365539    1534 out.go:177] * Verifying ingress addon...
	I0524 11:36:50.357776    1534 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0524 11:36:50.357776    1534 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.361446    1534 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.364279    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0524 11:36:50.381598    1534 out.go:177] * Verifying csi-hostpath-driver addon...
	I0524 11:36:50.369698    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 11:36:50.369727    1534 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0524 11:36:50.375900    1534 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0524 11:36:50.387638    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0524 11:36:50.387638    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.387646    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.388147    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0524 11:36:50.390627    1534 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0524 11:36:50.391169    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0524 11:36:50.400375    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
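The long sed pipeline above rewrites the live coredns ConfigMap so pods can resolve host.minikube.internal to the host-side gateway. The Corefile fragment it injects, reconstructed from the sed expressions in the command itself, is:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}

CoreDNS serves that one name from the static hosts table and falls through to its normal forwarding for every other query; the `log` directive is inserted alongside for debuggability.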
	I0524 11:36:50.433263    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.499595    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0524 11:36:50.499607    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0524 11:36:50.511369    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.545082    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0524 11:36:50.545093    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0524 11:36:50.571075    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0524 11:36:50.571085    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0524 11:36:50.614490    1534 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0524 11:36:50.614502    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0524 11:36:50.628252    1534 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.628261    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0524 11:36:50.647925    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.858973    1534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0524 11:36:50.859000    1534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:50.862644    1534 out.go:177] * Verifying Kubernetes components...
	I0524 11:36:50.870714    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:51.015230    1534 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0524 11:36:51.239743    1534 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.239769    1534 retry.go:31] will retry after 300.967986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
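Both failures above are the same apply-ordering race: the VolumeSnapshotClass object is submitted in the same kubectl apply as the CRD that defines its kind, and the API server rejects it because the new kind is not yet established ("no matches for kind ... ensure CRDs are installed first"). The retry, and later the --force re-apply, succeed once the CRDs have registered. A common way to avoid the race, sketched with stock kubectl and the resource names from the stdout above:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml     # CRDs first
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s      # wait for the kind to exist
	kubectl apply -f csi-hostpath-snapshotclass.yaml                       # then the custom resources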
	I0524 11:36:51.240163    1534 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242031    1534 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0524 11:36:51.242040    1534 node_ready.go:38] duration metric: took 1.869375ms waiting for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242043    1534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:51.247820    1534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:51.542933    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:53.257970    1534 pod_ready.go:92] pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.257986    1534 pod_ready.go:81] duration metric: took 2.01016425s waiting for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.257991    1534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260855    1534 pod_ready.go:92] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.260862    1534 pod_ready.go:81] duration metric: took 2.866833ms waiting for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260867    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263593    1534 pod_ready.go:92] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.263598    1534 pod_ready.go:81] duration metric: took 2.728ms waiting for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263603    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266314    1534 pod_ready.go:92] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.266322    1534 pod_ready.go:81] duration metric: took 2.716417ms waiting for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266326    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268820    1534 pod_ready.go:92] pod "kube-proxy-2gj6m" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.268826    1534 pod_ready.go:81] duration metric: took 2.496209ms waiting for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268830    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659694    1534 pod_ready.go:92] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.659709    1534 pod_ready.go:81] duration metric: took 390.87725ms waiting for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659719    1534 pod_ready.go:38] duration metric: took 2.417685875s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:53.659737    1534 api_server.go:52] waiting for apiserver process to appear ...
	I0524 11:36:53.659818    1534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 11:36:54.012047    1534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469105375s)
	I0524 11:36:54.012061    1534 api_server.go:72] duration metric: took 3.153054583s to wait for apiserver process to appear ...
	I0524 11:36:54.012066    1534 api_server.go:88] waiting for apiserver healthz status ...
	I0524 11:36:54.012074    1534 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0524 11:36:54.015086    1534 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0524 11:36:54.015747    1534 api_server.go:141] control plane version: v1.27.2
	I0524 11:36:54.015755    1534 api_server.go:131] duration metric: took 3.685917ms to wait for apiserver health ...
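The healthz wait is a plain HTTPS GET against the apiserver; an HTTP 200 with body "ok" marks the control plane healthy, after which minikube queries the control-plane version. The probe is reproducible by hand against the address from this run (-k skips CA verification for brevity; /healthz is readable by unauthenticated clients under kubeadm's default RBAC):

	curl -k https://192.168.105.2:8443/healthz                                         # expect: ok
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.105.2:8443/healthz   # same, verifying the cluster CA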
	I0524 11:36:54.015758    1534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 11:36:54.018844    1534 system_pods.go:59] 9 kube-system pods found
	I0524 11:36:54.018857    1534 system_pods.go:61] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.018861    1534 system_pods.go:61] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.018863    1534 system_pods.go:61] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.018865    1534 system_pods.go:61] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.018868    1534 system_pods.go:61] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.018870    1534 system_pods.go:61] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.018873    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018876    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018879    1534 system_pods.go:61] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.018881    1534 system_pods.go:74] duration metric: took 3.121167ms to wait for pod list to return data ...
	I0524 11:36:54.018883    1534 default_sa.go:34] waiting for default service account to be created ...
	I0524 11:36:54.057892    1534 default_sa.go:45] found service account: "default"
	I0524 11:36:54.057899    1534 default_sa.go:55] duration metric: took 39.013541ms for default service account to be created ...
	I0524 11:36:54.057902    1534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 11:36:54.259995    1534 system_pods.go:86] 9 kube-system pods found
	I0524 11:36:54.260005    1534 system_pods.go:89] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.260008    1534 system_pods.go:89] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.260011    1534 system_pods.go:89] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.260014    1534 system_pods.go:89] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.260016    1534 system_pods.go:89] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.260019    1534 system_pods.go:89] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.260023    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260027    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260030    1534 system_pods.go:89] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.260033    1534 system_pods.go:126] duration metric: took 202.129584ms to wait for k8s-apps to be running ...
	I0524 11:36:54.260037    1534 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 11:36:54.260088    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:54.265390    1534 system_svc.go:56] duration metric: took 5.350666ms WaitForService to wait for kubelet.
	I0524 11:36:54.265399    1534 kubeadm.go:581] duration metric: took 3.406395625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 11:36:54.265408    1534 node_conditions.go:102] verifying NodePressure condition ...
	I0524 11:36:54.458086    1534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 11:36:54.458097    1534 node_conditions.go:123] node cpu capacity is 2
	I0524 11:36:54.458103    1534 node_conditions.go:105] duration metric: took 192.694167ms to run NodePressure ...
	I0524 11:36:54.458107    1534 start.go:228] waiting for startup goroutines ...
	I0524 11:36:56.972492    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0524 11:36:56.972559    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.029376    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0524 11:36:57.038824    1534 addons.go:228] Setting addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.038864    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:57.040182    1534 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0524 11:36:57.040196    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.078053    1534 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0524 11:36:57.082115    1534 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0524 11:36:57.085015    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0524 11:36:57.085022    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0524 11:36:57.091862    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0524 11:36:57.091873    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0524 11:36:57.099462    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.099472    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0524 11:36:57.106631    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.550488    1534 addons.go:464] Verifying addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.555392    1534 out.go:177] * Verifying gcp-auth addon...
	I0524 11:36:57.561721    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0524 11:36:57.566760    1534 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0524 11:36:57.566769    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.070711    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.570942    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.076515    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.570540    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.070962    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.571104    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.071573    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.571018    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.072518    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.570869    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.071445    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.570661    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.070807    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.570832    1534 kapi.go:107] duration metric: took 7.009157292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0524 11:37:04.574809    1534 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0524 11:37:04.579620    1534 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0524 11:37:04.583658    1534 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
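Per the messages above, a pod opts out of credential injection by carrying the gcp-auth-skip-secret label at creation time, so the gcp-auth mutating webhook leaves it alone. A minimal sketch of such a pod, where the name and image are placeholders rather than anything from this run:

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"  # opt-out label named in the message above
	spec:
	  containers:
	  - name: app
	    image: busybox                # placeholder image
	    command: ["sleep", "3600"]
	EOF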
	I0524 11:42:50.357445    1534 kapi.go:107] duration metric: took 6m0.009773291s to wait for kubernetes.io/minikube-addons=registry ...
	W0524 11:42:50.357907    1534 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0524 11:42:50.387243    1534 kapi.go:107] duration metric: took 6m0.001495875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0524 11:42:50.387315    1534 kapi.go:107] duration metric: took 6m0.013814333s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0524 11:42:50.395532    1534 out.go:177] * Enabled addons: metrics-server, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0524 11:42:50.403494    1534 addons.go:499] enable addons completed in 6m0.072361709s: enabled=[metrics-server ingress-dns inspektor-gadget cloud-spanner storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0524 11:42:50.403556    1534 start.go:233] waiting for cluster config update ...
	I0524 11:42:50.403587    1534 start.go:242] writing updated cluster config ...
	I0524 11:42:50.408325    1534 ssh_runner.go:195] Run: rm -f paused
	I0524 11:42:50.568016    1534 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 11:42:50.572568    1534 out.go:177] 
	W0524 11:42:50.576443    1534 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 11:42:50.580476    1534 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 11:42:50.587567    1534 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 18:54:50 UTC. --
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516120296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516129229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.568789730Z" level=info msg="ignoring event" container=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569050551Z" level=info msg="shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569107330Z" level=warning msg="cleaning up after shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569117420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607638824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607702137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607716942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607727942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f47df037d99569ad6cd8f4ef2c3926ab0aed2bb5b85f513c520fc0abc42c67f3/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.953788187Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557702813Z" level=info msg="shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557754555Z" level=warning msg="cleaning up after shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557759977Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:02 addons-514000 dockerd[916]: time="2023-05-24T18:37:02.558086156Z" level=info msg="ignoring event" container=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[916]: time="2023-05-24T18:37:03.602683250Z" level=info msg="ignoring event" container=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603089611Z" level=info msg="shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603147527Z" level=warning msg="cleaning up after shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603154445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:03 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856707697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856808407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856985177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856997233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	d1ad6d2cd7d4d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              17 minutes ago      Running             gcp-auth                     0                   f47df037d9956
	2623eeac77855       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   17 minutes ago      Running             volume-snapshot-controller   0                   60ea5019d1f26
	61fdb94dca547       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   17 minutes ago      Running             volume-snapshot-controller   0                   1f82f1afb5ca6
	5e708965dbb0a       97e04611ad434                                                                                                             17 minutes ago      Running             coredns                      0                   eaf04536825bb
	c6d1bdca910b8       ba04bb24b9575                                                                                                             17 minutes ago      Running             storage-provisioner          0                   55be207be2898
	bf84d832ec967       29921a0845422                                                                                                             18 minutes ago      Running             kube-proxy                   0                   59d50204b0754
	046435c695b1e       305d7ed1dae28                                                                                                             18 minutes ago      Running             kube-scheduler               0                   cd9a002bb369c
	aa80b21f85087       2ee705380c3c5                                                                                                             18 minutes ago      Running             kube-controller-manager      0                   0ebf3f27cb768
	d5556d8565d49       24bc64e911039                                                                                                             18 minutes ago      Running             etcd                         0                   37fcc92ec98a7
	a485542b186e4       72c9df6be7f1b                                                                                                             18 minutes ago      Running             kube-apiserver               0                   383872bb10f81
	
	* 
	* ==> coredns [5e708965dbb0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59819 - 54023 "HINFO IN 5089267470380203033.66065138292483152. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.424436073s
	[INFO] 10.244.0.7:57634 - 60032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112931s
	[INFO] 10.244.0.7:36916 - 20311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078547s
	[INFO] 10.244.0.7:53888 - 30613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056548s
	[INFO] 10.244.0.7:40805 - 41575 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000031112s
	[INFO] 10.244.0.7:39418 - 54110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031567s
	[INFO] 10.244.0.7:45485 - 20279 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113676s
	[INFO] 10.244.0.7:49511 - 45953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000780781s
	[INFO] 10.244.0.7:49660 - 37020 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00090552s
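	
	The NXDOMAIN answers above are expected resolver behavior rather than a failure: cri-dockerd rewrote the pod's resolv.conf with options ndots:5 (see the dockerd section above), so every search-domain expansion of storage.googleapis.com is tried and rejected before the bare name finally resolves with NOERROR. The configuration in play, copied from the rewrite logged earlier:
	
	    nameserver 10.96.0.10
	    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local
	    options ndots:5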
	
	* 
	* ==> describe nodes <==
	* Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 18:54:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc66183cd0c646be999944d821185b81
	  System UUID:                cc66183cd0c646be999944d821185b81
	  Boot ID:                    2cd753bf-40ed-44ce-928e-d8bb002a6012
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-5429c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5d78c9869d-dmkfx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-514000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-514000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-514000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-2gj6m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-514000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-j5jhp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-txrxl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
	
	* 
	* ==> dmesg <==
	* [May24 18:36] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.727578] EINJ: EINJ table not found.
	[  +0.656332] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.905553] systemd-fstab-generator[471]: Ignoring "noauto" for root device
	[  +0.096232] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +2.874276] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +1.463827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.166355] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.076432] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.091985] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +1.135416] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091978] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.084182] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.089221] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.079548] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.085105] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +2.454751] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
	[  +5.146027] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[ +14.118818] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.617169] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.922890] kauditd_printk_skb: 33 callbacks suppressed
	[May24 18:37] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [d5556d8565d4] <==
	* {"level":"info","ts":"2023-05-24T18:36:33.256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-514000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:46:33.876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2023-05-24T18:46:33.881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.400485ms","hash":3638416343}
	{"level":"info","ts":"2023-05-24T18:46:33.882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638416343,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2023-05-24T18:51:33.887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1145,"took":"2.024563ms","hash":894933936}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":894933936,"revision":1145,"compact-revision":846}
	
	* 
	* ==> gcp-auth [d1ad6d2cd7d4] <==
	* 2023/05/24 18:37:03 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  18:54:51 up 18 min,  0 users,  load average: 0.50, 0.63, 0.47
	Linux addons-514000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a485542b186e] <==
	* I0524 18:36:51.381613       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:51.386892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:36:51.387012       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:51.392664       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:36:51.393166       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:51.395637       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:36:51.395722       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:51.400840       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:36:51.401085       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:57.593427       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.98.17.234]
	I0524 18:36:57.610279       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0524 18:41:34.543984       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:41:34.544093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.544191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.544366       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.554262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.554305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.559325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.559355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.542946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.543557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.550014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.550133       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.556769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.556848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [aa80b21f8508] <==
	* I0524 18:37:01.505510       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:01.519013       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.495937       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.582819       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.506916       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.513803       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:03.593747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.596306       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598360       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598521       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:03.685792       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.524940       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.535090       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.540969       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.541239       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:04.555353       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:20.560907       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0524 18:37:20.561333       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0524 18:37:20.662721       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 18:37:20.895710       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0524 18:37:20.999329       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 18:37:33.024397       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:33.041354       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:34.012720       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:34.026381       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [bf84d832ec96] <==
	* I0524 18:36:51.096070       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0524 18:36:51.096254       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0524 18:36:51.096305       1 server_others.go:551] "Using iptables proxy"
	I0524 18:36:51.129985       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 18:36:51.130045       1 server_others.go:190] "Using iptables Proxier"
	I0524 18:36:51.130091       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 18:36:51.130875       1 server.go:657] "Version info" version="v1.27.2"
	I0524 18:36:51.130883       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 18:36:51.134580       1 config.go:188] "Starting service config controller"
	I0524 18:36:51.134608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 18:36:51.134627       1 config.go:97] "Starting endpoint slice config controller"
	I0524 18:36:51.134630       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 18:36:51.134949       1 config.go:315] "Starting node config controller"
	I0524 18:36:51.134952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 18:36:51.240491       1 shared_informer.go:318] Caches are synced for node config
	I0524 18:36:51.240513       1 shared_informer.go:318] Caches are synced for service config
	I0524 18:36:51.240529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [046435c695b1] <==
	* W0524 18:36:34.551296       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 18:36:34.551335       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 18:36:34.555158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 18:36:34.555224       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 18:36:34.555257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:34.555277       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:34.555318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:34.555338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:34.555364       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:34.555398       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:34.555416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:34.555434       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.414754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:35.414831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:35.419590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 18:36:35.419621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 18:36:35.431658       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:35.431697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:35.542100       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:35.542130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:35.557940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:35.558018       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.599004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 18:36:35.599089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0524 18:36:36.142741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 18:54:51 UTC. --
	May 24 18:49:37 addons-514000 kubelet[2266]: E0524 18:49:37.216874    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:49:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:49:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:49:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:50:37 addons-514000 kubelet[2266]: E0524 18:50:37.213172    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:50:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:50:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:50:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:51:37 addons-514000 kubelet[2266]: E0524 18:51:37.209933    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:51:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:51:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:51:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:51:37 addons-514000 kubelet[2266]: W0524 18:51:37.212802    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	May 24 18:52:37 addons-514000 kubelet[2266]: E0524 18:52:37.222293    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:52:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:52:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:52:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:53:37 addons-514000 kubelet[2266]: E0524 18:53:37.219445    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:53:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:53:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:53:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:54:37 addons-514000 kubelet[2266]: E0524 18:54:37.209773    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:54:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:54:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:54:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
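	
	The canary error above repeats once a minute for the entire run: the Buildroot 5.10.57 guest kernel exposes no ip6tables nat table, so kubelet's IPv6 canary chain can never be created. For these IPv4-only tests it is noise rather than a cause of failure. A minimal way to confirm from the host, sketched with standard minikube and iptables commands (not taken from this report):
	
	    out/minikube-darwin-arm64 ssh -p addons-514000 "sudo ip6tables -t nat -L"   # should reproduce the 'Table does not exist' error
	    out/minikube-darwin-arm64 ssh -p addons-514000 "lsmod | grep ip6table"      # check whether an ip6table_nat module is even available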
	
	* 
	* ==> storage-provisioner [c6d1bdca910b] <==
	* I0524 18:36:52.162540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 18:36:52.179095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 18:36:52.179236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 18:36:52.184538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 18:36:52.185437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	I0524 18:36:52.187871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d6698b6-9eb5-4aee-aab5-f9c270917482", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a became leader
	I0524 18:36:52.285999       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.83s)
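
The post-mortem above shows every core container Running with zero restarts and no registry pods among the node's non-terminated pods, so the 720s timeout points at the registry addon's pods never becoming Ready rather than at a control-plane crash. A first diagnostic sketch; the label selector is the one the upstream registry addon applies and does not appear in this report, so treat it as an assumption:

    kubectl --context addons-514000 -n kube-system get pods -o wide -l kubernetes.io/minikube-addons=registry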

TestAddons/parallel/Ingress (0.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-514000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-514000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (36.573666ms)

** stderr ** 
	error: no matching resources found

** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
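
The "no matching resources found" error means the wait saw no controller pod at all: after the 90s window the ingress-nginx deployment had produced nothing to wait on. A quick check, reusing the exact namespace and selector from the test above:

    kubectl --context addons-514000 -n ingress-nginx get deploy,pods --selector=app.kubernetes.io/component=controller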
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | --download-only -p             | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT |                     |
	|         | binary-mirror-689000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49309         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-689000        | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | -p addons-514000               | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:42 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:54 PDT |                     |
	|         | addons-514000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:36:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:36:07.002339    1534 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:36:07.002453    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002456    1534 out.go:309] Setting ErrFile to fd 2...
	I0524 11:36:07.002459    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002536    1534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 11:36:07.003586    1534 out.go:303] Setting JSON to false
	I0524 11:36:07.018861    1534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":338,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:36:07.018925    1534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:36:07.027769    1534 out.go:177] * [addons-514000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:36:07.031820    1534 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 11:36:07.031893    1534 notify.go:220] Checking for updates...
	I0524 11:36:07.038648    1534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:07.041871    1534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:36:07.045796    1534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:36:07.047102    1534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 11:36:07.049751    1534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 11:36:07.052962    1534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:36:07.056656    1534 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 11:36:07.063768    1534 start.go:295] selected driver: qemu2
	I0524 11:36:07.063774    1534 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:36:07.063780    1534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 11:36:07.066216    1534 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:36:07.068844    1534 out.go:177] * Automatically selected the socket_vmnet network
	I0524 11:36:07.072801    1534 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 11:36:07.072817    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:07.072825    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:07.072829    1534 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 11:36:07.072834    1534 start_flags.go:319] config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:07.072903    1534 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:36:07.080756    1534 out.go:177] * Starting control plane node addons-514000 in cluster addons-514000
	I0524 11:36:07.084763    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:07.084787    1534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:36:07.084798    1534 cache.go:57] Caching tarball of preloaded images
	I0524 11:36:07.084855    1534 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 11:36:07.084860    1534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 11:36:07.085026    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:07.085039    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json: {Name:mk030e94b16168c63405a9b01e247098a953bb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:07.085215    1534 cache.go:195] Successfully downloaded all kic artifacts
	I0524 11:36:07.085252    1534 start.go:364] acquiring machines lock for addons-514000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 11:36:07.085315    1534 start.go:368] acquired machines lock for "addons-514000" in 57.708µs
	I0524 11:36:07.085327    1534 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:07.085355    1534 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 11:36:07.093778    1534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 11:36:07.463575    1534 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0524 11:36:07.463635    1534 client.go:168] LocalClient.Create starting
	I0524 11:36:07.463808    1534 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 11:36:07.521208    1534 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 11:36:07.678481    1534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 11:36:08.060894    1534 main.go:141] libmachine: Creating SSH key...
	I0524 11:36:08.147520    1534 main.go:141] libmachine: Creating Disk image...
	I0524 11:36:08.147526    1534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 11:36:08.147754    1534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.231403    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.231426    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.231485    1534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0524 11:36:08.238737    1534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 11:36:08.238750    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.238766    1534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
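The two qemu-img calls above are the entire disk-provisioning step: convert the raw seed image to qcow2, then grow the virtual size by 20000 MB (qcow2 stays sparse, so no host space is consumed until the guest writes). A minimal Go sketch of the same two-step flow — placeholder paths, not minikube's actual code:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createDisk mirrors the log: qemu-img convert (raw -> qcow2), then
// qemu-img resize to grow the virtual size. Paths and size are placeholders.
func createDisk(raw, qcow2 string, extraMB int) error {
	if out, err := exec.Command("qemu-img", "convert",
		"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	// "+20000M" grows the virtual size only; the qcow2 file stays sparse.
	if out, err := exec.Command("qemu-img", "resize",
		qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		log.Fatal(err)
	}
}
```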
	I0524 11:36:08.238773    1534 main.go:141] libmachine: Starting QEMU VM...
	I0524 11:36:08.238817    1534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:73:48:f5:f9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.309201    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.309237    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.309242    1534 main.go:141] libmachine: Attempt 0
	I0524 11:36:08.309258    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:10.311441    1534 main.go:141] libmachine: Attempt 1
	I0524 11:36:10.311529    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:12.313222    1534 main.go:141] libmachine: Attempt 2
	I0524 11:36:12.313245    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:14.315294    1534 main.go:141] libmachine: Attempt 3
	I0524 11:36:14.315307    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:16.317343    1534 main.go:141] libmachine: Attempt 4
	I0524 11:36:16.317356    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:18.319398    1534 main.go:141] libmachine: Attempt 5
	I0524 11:36:18.319426    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321607    1534 main.go:141] libmachine: Attempt 6
	I0524 11:36:20.321690    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321979    1534 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0524 11:36:20.322073    1534 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 11:36:20.322118    1534 main.go:141] libmachine: Found match: a:73:48:f5:f9:b3
	I0524 11:36:20.322159    1534 main.go:141] libmachine: IP: 192.168.105.2
	I0524 11:36:20.322182    1534 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
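The attempt loop above polls macOS's DHCP lease database every 2 seconds until the VM's MAC appears; note it searches for "a:73:48:f5:f9:b3" even though the NIC was configured with "0a:73:...", because the lease file writes octets without leading zeros. A sketch of the lookup, assuming the usual `/var/db/dhcpd_leases` entry layout (ip_address before hw_address inside each block):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP scans dhcpd_leases for the entry whose hw_address ends with the
// VM's MAC and returns that entry's ip_address. Assumes ip_address
// precedes hw_address within each {...} block, as in the usual layout.
func findIP(leasesPath, mac string) (string, bool) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,a:73:48:f5:f9:b3"
			if strings.HasSuffix(line, ","+mac) {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	// MAC with leading zeros stripped, matching the log's search string.
	if ip, ok := findIP("/var/db/dhcpd_leases", "a:73:48:f5:f9:b3"); ok {
		fmt.Println("IP:", ip)
	}
}
```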
	I0524 11:36:22.345943    1534 machine.go:88] provisioning docker machine ...
	I0524 11:36:22.346010    1534 buildroot.go:166] provisioning hostname "addons-514000"
	I0524 11:36:22.346753    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.347771    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.347789    1534 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0524 11:36:22.440700    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0524 11:36:22.440862    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.441350    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.441366    1534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 11:36:22.513129    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 11:36:22.513148    1534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 11:36:22.513166    1534 buildroot.go:174] setting up certificates
	I0524 11:36:22.513196    1534 provision.go:83] configureAuth start
	I0524 11:36:22.513202    1534 provision.go:138] copyHostCerts
	I0524 11:36:22.513384    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 11:36:22.513907    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 11:36:22.514185    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 11:36:22.514351    1534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-514000]
	I0524 11:36:22.615592    1534 provision.go:172] copyRemoteCerts
	I0524 11:36:22.615660    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 11:36:22.615678    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:22.647614    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 11:36:22.654906    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 11:36:22.661956    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 11:36:22.668901    1534 provision.go:86] duration metric: configureAuth took 155.700959ms
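configureAuth copies the host CA material into place and signs a server certificate whose SANs cover the machine IP and names from the `san=[...]` list above. A self-contained crypto/x509 sketch of that signing step — a freshly generated CA stands in for the files under .minikube/certs, and error handling is elided for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical in-memory CA; minikube loads ca.pem/ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log's "san=[...]" line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-514000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "addons-514000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```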
	I0524 11:36:22.668909    1534 buildroot.go:189] setting minikube options for container-runtime
	I0524 11:36:22.669263    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:22.669315    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.669538    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.669543    1534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 11:36:22.728343    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 11:36:22.728351    1534 buildroot.go:70] root file system type: tmpfs
	I0524 11:36:22.728414    1534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 11:36:22.728455    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.728711    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.728749    1534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 11:36:22.797892    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 11:36:22.797940    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.798220    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.798231    1534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 11:36:23.149053    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 11:36:23.149067    1534 machine.go:91] provisioned docker machine in 803.097167ms
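The `diff ... || { mv ...; systemctl ...; }` one-liner above only installs the new unit and bounces Docker when the content actually changed; here diff failed because no unit existed yet (first boot), so the new file was moved into place and the service enabled. A local Go sketch of the same only-if-changed install, standing in for minikube's SSH-driven version:

```go
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// installIfChanged swaps in the new unit and restarts the service only
// when the content differs (or the old unit is missing, as on first boot).
func installIfChanged(oldPath, newPath, service string) error {
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if oldData, err := os.ReadFile(oldPath); err == nil && bytes.Equal(oldData, newData) {
		return nil // identical: nothing to do, no restart
	}
	if err := os.Rename(newPath, oldPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", service}, {"restart", service},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		log.Fatal(err)
	}
}
```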
	I0524 11:36:23.149073    1534 client.go:171] LocalClient.Create took 15.685539208s
	I0524 11:36:23.149079    1534 start.go:167] duration metric: libmachine.API.Create for "addons-514000" took 15.685619292s
	I0524 11:36:23.149084    1534 start.go:300] post-start starting for "addons-514000" (driver="qemu2")
	I0524 11:36:23.149087    1534 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 11:36:23.149151    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 11:36:23.149161    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.182740    1534 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 11:36:23.184182    1534 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 11:36:23.184191    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 11:36:23.184263    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 11:36:23.184291    1534 start.go:303] post-start completed in 35.204125ms
	I0524 11:36:23.184667    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:23.184838    1534 start.go:128] duration metric: createHost completed in 16.099587584s
	I0524 11:36:23.184860    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:23.185079    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:23.185084    1534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 11:36:23.240206    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684953383.421013085
	
	I0524 11:36:23.240212    1534 fix.go:207] guest clock: 1684953383.421013085
	I0524 11:36:23.240216    1534 fix.go:220] Guest: 2023-05-24 11:36:23.421013085 -0700 PDT Remote: 2023-05-24 11:36:23.184841 -0700 PDT m=+16.200821626 (delta=236.172085ms)
	I0524 11:36:23.240228    1534 fix.go:191] guest clock delta is within tolerance: 236.172085ms
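fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the absolute delta is inside a tolerance window (the exact threshold isn't shown in this log). A sketch of that comparison using the literal values above; float parsing loses some nanosecond precision, which is fine for a tolerance check:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns host
// time minus guest time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	// Host timestamp taken from the log line (2023-05-24 11:36:23.184841 PDT).
	pdt := time.FixedZone("PDT", -7*3600)
	host := time.Date(2023, time.May, 24, 11, 36, 23, 184841000, pdt)
	d, _ := clockDelta("1684953383.421013085", host)
	if d < 0 {
		d = -d
	}
	fmt.Println("delta:", d) // ~236ms, within tolerance per the log
}
```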
	I0524 11:36:23.240231    1534 start.go:83] releasing machines lock for "addons-514000", held for 16.155020041s
	I0524 11:36:23.240534    1534 ssh_runner.go:195] Run: cat /version.json
	I0524 11:36:23.240542    1534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 11:36:23.240552    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.240589    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.271294    1534 ssh_runner.go:195] Run: systemctl --version
	I0524 11:36:23.356274    1534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 11:36:23.358206    1534 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 11:36:23.358253    1534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 11:36:23.363251    1534 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 11:36:23.363272    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:23.363358    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:23.374219    1534 docker.go:633] Got preloaded images: 
	I0524 11:36:23.374227    1534 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 11:36:23.374272    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:23.377135    1534 ssh_runner.go:195] Run: which lz4
	I0524 11:36:23.378475    1534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0524 11:36:23.379822    1534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 11:36:23.379833    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 11:36:24.715030    1534 docker.go:597] Took 1.336609 seconds to copy over tarball
	I0524 11:36:24.715105    1534 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 11:36:25.802869    1534 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087750334s)
	I0524 11:36:25.802885    1534 ssh_runner.go:146] rm: /preloaded.tar.lz4
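Since the tarball wasn't on the guest, the cached preload (≈344 MB) was scp'd over, unpacked into /var with lz4-compressed tar, and then deleted to reclaim space. A sketch of the guest-side steps, run locally here rather than through minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the log: verify the tarball exists, unpack it
// into /var via `tar -I lz4`, then remove it.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err // minikube would scp the cached tarball over first
	}
	// sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4",
		"-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("untar: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```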
	I0524 11:36:25.818539    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:25.821398    1534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 11:36:25.826757    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:25.912573    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:27.259007    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346426625s)
	I0524 11:36:27.259050    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.259161    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.264502    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 11:36:27.267902    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 11:36:27.271357    1534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.271387    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 11:36:27.274823    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.278019    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 11:36:27.280856    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.283904    1534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 11:36:27.287473    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 11:36:27.291108    1534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 11:36:27.294288    1534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 11:36:27.297250    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.376117    1534 ssh_runner.go:195] Run: sudo systemctl restart containerd
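The run of sed edits above retargets containerd to match Docker's setup: cgroupfs instead of systemd cgroups, the runc.v2 runtime shim, and the standard /etc/cni/net.d conf dir. A sketch of one such edit (the SystemdCgroup flip) done with a Go regexp instead of sed:

```go
package main

import (
	"log"
	"os"
	"regexp"
)

// useCgroupfs performs the same edit as the sed command in the log:
// every "SystemdCgroup = ..." line in containerd's config becomes
// "SystemdCgroup = false", keeping the original indentation.
func useCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := useCgroupfs("/etc/containerd/config.toml"); err != nil {
		log.Fatal(err)
	}
}
```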
	I0524 11:36:27.384917    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.384994    1534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 11:36:27.390435    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.395426    1534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 11:36:27.402483    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.406870    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.411215    1534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 11:36:27.451530    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.456795    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.461922    1534 ssh_runner.go:195] Run: which cri-dockerd
	I0524 11:36:27.463049    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 11:36:27.465876    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 11:36:27.470660    1534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 11:36:27.538638    1534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 11:36:27.616092    1534 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.616109    1534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 11:36:27.621459    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.708405    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:28.851963    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.143548708s)
	I0524 11:36:28.852015    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:28.939002    1534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 11:36:29.020013    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:29.108812    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.187424    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 11:36:29.194801    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.274472    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 11:36:29.298400    1534 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 11:36:29.298499    1534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 11:36:29.300633    1534 start.go:549] Will wait 60s for crictl version
	I0524 11:36:29.300681    1534 ssh_runner.go:195] Run: which crictl
	I0524 11:36:29.302069    1534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 11:36:29.320125    1534 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 11:36:29.320196    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.329425    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.346012    1534 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 11:36:29.346159    1534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 11:36:29.347609    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
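The bash one-liner above rewrites /etc/hosts idempotently: strip any existing `host.minikube.internal` mapping, append a fresh one, and copy the temp file back into place. The same logic as a small Go sketch (a direct file rewrite instead of the temp-file-plus-sudo-cp dance):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// pinHost drops any stale line ending in "\t<name>" and appends the
// fresh "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var out []string
	for _, l := range lines {
		if !strings.HasSuffix(l, "\t"+name) {
			out = append(out, l)
		}
	}
	out = append(out, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```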
	I0524 11:36:29.351578    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:29.351619    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.359168    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.359177    1534 docker.go:563] Images already preloaded, skipping extraction
	I0524 11:36:29.359234    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.366578    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.366587    1534 cache_images.go:84] Images are preloaded, skipping loading
	I0524 11:36:29.366634    1534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 11:36:29.376722    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:29.376734    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:29.376743    1534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 11:36:29.376755    1534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 11:36:29.376831    1534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
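The kubeadm config just written is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A stdlib-only sketch that splits on the separators and reports each document's kind; the path comes from the scp line below:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// Naive split on document separators; enough to list the four kinds.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
```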
	
	I0524 11:36:29.376873    1534 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 11:36:29.376934    1534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 11:36:29.379950    1534 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 11:36:29.379980    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 11:36:29.383262    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 11:36:29.388298    1534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 11:36:29.393370    1534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0524 11:36:29.398040    1534 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0524 11:36:29.399441    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.403560    1534 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0524 11:36:29.403576    1534 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.403733    1534 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 11:36:29.494908    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt ...
	I0524 11:36:29.494916    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt: {Name:mkde13471093958a457d9307a0c213d7ba461177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495144    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key ...
	I0524 11:36:29.495147    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key: {Name:mk5b2a6f100829fa25412e4c96a6b4d9b186c9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495264    1534 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 11:36:29.601357    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt ...
	I0524 11:36:29.601364    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt: {Name:mkc3f94501092c9c51cfa6d329a0a2c4cec184ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601593    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key ...
	I0524 11:36:29.601596    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key: {Name:mk7acf18000a82a656fee32bbd454a3c129dabde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601733    1534 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key
	I0524 11:36:29.601741    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0524 11:36:29.653842    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt ...
	I0524 11:36:29.653845    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: {Name:mk3856cd37d1f07be2cc9902b19f9498b880112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654036    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key ...
	I0524 11:36:29.654040    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key: {Name:mkbc8808085e1496dcb2b3e03156e443b7b7994b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654176    1534 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969
	I0524 11:36:29.654188    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 11:36:29.724674    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 ...
	I0524 11:36:29.724678    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969: {Name:mk424188d0f28cb0aa520452bb8ec4583a153ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724815    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 ...
	I0524 11:36:29.724818    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969: {Name:mk98c3231c62717b32e2418cabd759d6ad5645ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724926    1534 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt
	I0524 11:36:29.725147    1534 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key
	I0524 11:36:29.725241    1534 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key
	I0524 11:36:29.725256    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0524 11:36:29.842949    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0524 11:36:29.842953    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mk581c30062675e68aafc25cb79bfc8a62fd3e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843105    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key ...
	I0524 11:36:29.843110    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key: {Name:mk019f6bac347a368012a36cea939860ce210025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843389    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 11:36:29.843593    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 11:36:29.843619    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 11:36:29.843756    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 11:36:29.844302    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 11:36:29.851879    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 11:36:29.859249    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 11:36:29.866847    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 11:36:29.873646    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 11:36:29.880415    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 11:36:29.887466    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 11:36:29.894575    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 11:36:29.901581    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 11:36:29.908027    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 11:36:29.914140    1534 ssh_runner.go:195] Run: openssl version
	I0524 11:36:29.916182    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 11:36:29.919659    1534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921372    1534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921394    1534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.923349    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 11:36:29.926902    1534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 11:36:29.928503    1534 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 11:36:29.928540    1534 kubeadm.go:404] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:29.928599    1534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 11:36:29.935998    1534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 11:36:29.939589    1534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 11:36:29.942818    1534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 11:36:29.945835    1534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 11:36:29.945853    1534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 11:36:29.967889    1534 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 11:36:29.967941    1534 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 11:36:30.020294    1534 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 11:36:30.020350    1534 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 11:36:30.020400    1534 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0524 11:36:30.076237    1534 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 11:36:30.084415    1534 out.go:204]   - Generating certificates and keys ...
	I0524 11:36:30.084460    1534 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 11:36:30.084494    1534 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 11:36:30.272940    1534 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 11:36:30.453046    1534 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 11:36:30.580586    1534 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 11:36:30.639773    1534 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 11:36:30.738497    1534 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 11:36:30.738567    1534 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.858811    1534 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 11:36:30.858875    1534 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.935967    1534 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 11:36:30.967281    1534 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 11:36:31.073416    1534 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 11:36:31.073445    1534 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 11:36:31.335469    1534 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 11:36:31.530915    1534 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 11:36:31.573436    1534 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 11:36:31.637219    1534 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 11:36:31.645102    1534 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 11:36:31.645531    1534 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 11:36:31.645571    1534 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 11:36:31.737201    1534 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 11:36:31.741345    1534 out.go:204]   - Booting up control plane ...
	I0524 11:36:31.741390    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 11:36:31.741439    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 11:36:31.741469    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 11:36:31.741512    1534 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 11:36:31.741595    1534 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 11:36:35.739695    1534 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002246 seconds
	I0524 11:36:35.739796    1534 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 11:36:35.750536    1534 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 11:36:36.270805    1534 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 11:36:36.271028    1534 kubeadm.go:322] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 11:36:36.776691    1534 kubeadm.go:322] [bootstrap-token] Using token: zlw52u.ca0agirmjwjpmd4f
	I0524 11:36:36.783931    1534 out.go:204]   - Configuring RBAC rules ...
	I0524 11:36:36.784005    1534 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 11:36:36.785227    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 11:36:36.791945    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 11:36:36.793322    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 11:36:36.794557    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 11:36:36.795891    1534 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 11:36:36.802617    1534 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 11:36:36.956552    1534 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 11:36:37.187637    1534 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 11:36:37.187937    1534 kubeadm.go:322] 
	I0524 11:36:37.187967    1534 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 11:36:37.187973    1534 kubeadm.go:322] 
	I0524 11:36:37.188044    1534 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 11:36:37.188053    1534 kubeadm.go:322] 
	I0524 11:36:37.188069    1534 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 11:36:37.188099    1534 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 11:36:37.188128    1534 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 11:36:37.188133    1534 kubeadm.go:322] 
	I0524 11:36:37.188155    1534 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 11:36:37.188158    1534 kubeadm.go:322] 
	I0524 11:36:37.188189    1534 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 11:36:37.188193    1534 kubeadm.go:322] 
	I0524 11:36:37.188219    1534 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 11:36:37.188277    1534 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 11:36:37.188314    1534 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 11:36:37.188322    1534 kubeadm.go:322] 
	I0524 11:36:37.188361    1534 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 11:36:37.188399    1534 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 11:36:37.188411    1534 kubeadm.go:322] 
	I0524 11:36:37.188464    1534 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188516    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 11:36:37.188534    1534 kubeadm.go:322] 	--control-plane 
	I0524 11:36:37.188538    1534 kubeadm.go:322] 
	I0524 11:36:37.188580    1534 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 11:36:37.188584    1534 kubeadm.go:322] 
	I0524 11:36:37.188629    1534 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188681    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 11:36:37.188736    1534 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 11:36:37.188819    1534 kubeadm.go:322] W0524 18:36:30.200947    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188904    1534 kubeadm.go:322] W0524 18:36:31.916526    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
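
For reference: the bootstrap token printed above expires after 24 hours by default. If it lapses before a node is joined, an equivalent join command can be regenerated on the control plane; a minimal sketch:

    # Mint a fresh bootstrap token and print the matching worker join command.
    kubeadm token create --print-join-command
    # For an extra control-plane node, also re-upload the control-plane certs
    # and note the certificate key it prints.
    sudo kubeadm init phase upload-certs --upload-certs
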
	I0524 11:36:37.188909    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:37.188916    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:37.195686    1534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 11:36:37.199715    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 11:36:37.203087    1534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
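
The 457-byte conflist written above is not echoed into the log. For orientation only, a bridge CNI config of the general shape minikube generates looks like the following; the values here are illustrative, not the verbatim file contents:

    # Illustrative sketch of a bridge CNI conflist (not the exact file).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
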
	I0524 11:36:37.208259    1534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 11:36:37.208303    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.208333    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=addons-514000 minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.258566    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.271047    1534 ops.go:34] apiserver oom_adj: -16
	I0524 11:36:37.796169    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.296162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.796257    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.295049    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.796244    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.796162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.296458    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.796323    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.296423    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.796432    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.296246    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.796149    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.296189    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.796183    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.296206    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.796370    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.296192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.296219    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.796135    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.296201    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.796192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.296070    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.332878    1534 kubeadm.go:1076] duration metric: took 13.124695208s to wait for elevateKubeSystemPrivileges.
	I0524 11:36:50.332892    1534 kubeadm.go:406] StartCluster complete in 20.404490625s
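
The repeated `kubectl get sa default` calls above are a readiness poll: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has completed its first sync, so minikube retries roughly twice a second before granting the kube-system privileges. A hand-rolled equivalent of that wait, as a sketch:

    # Poll until the 'default' ServiceAccount appears, mirroring the loop above.
    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
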
	I0524 11:36:50.332916    1534 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333079    1534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:50.333301    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333499    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 11:36:50.333541    1534 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0524 11:36:50.333603    1534 addons.go:66] Setting ingress=true in profile "addons-514000"
	I0524 11:36:50.333609    1534 addons.go:66] Setting registry=true in profile "addons-514000"
	I0524 11:36:50.333611    1534 addons.go:228] Setting addon ingress=true in "addons-514000"
	I0524 11:36:50.333614    1534 addons.go:228] Setting addon registry=true in "addons-514000"
	I0524 11:36:50.333650    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333646    1534 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0524 11:36:50.333656    1534 addons.go:66] Setting storage-provisioner=true in profile "addons-514000"
	I0524 11:36:50.333660    1534 addons.go:228] Setting addon storage-provisioner=true in "addons-514000"
	I0524 11:36:50.333671    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333804    1534 addons.go:66] Setting metrics-server=true in profile "addons-514000"
	I0524 11:36:50.333879    1534 addons.go:228] Setting addon metrics-server=true in "addons-514000"
	I0524 11:36:50.333906    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333926    1534 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.333947    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:50.333682    1534 addons.go:66] Setting ingress-dns=true in profile "addons-514000"
	I0524 11:36:50.333976    1534 addons.go:228] Setting addon ingress-dns=true in "addons-514000"
	I0524 11:36:50.333995    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334035    1534 addons.go:66] Setting gcp-auth=true in profile "addons-514000"
	I0524 11:36:50.333605    1534 addons.go:66] Setting volumesnapshots=true in profile "addons-514000"
	I0524 11:36:50.334092    1534 addons.go:228] Setting addon volumesnapshots=true in "addons-514000"
	I0524 11:36:50.334116    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334159    1534 addons.go:66] Setting default-storageclass=true in profile "addons-514000"
	I0524 11:36:50.334172    1534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0524 11:36:50.333653    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334095    1534 mustload.go:65] Loading cluster: addons-514000
	I0524 11:36:50.334706    1534 addons.go:66] Setting inspektor-gadget=true in profile "addons-514000"
	I0524 11:36:50.334713    1534 addons.go:228] Setting addon inspektor-gadget=true in "addons-514000"
	I0524 11:36:50.333694    1534 addons.go:66] Setting cloud-spanner=true in profile "addons-514000"
	I0524 11:36:50.334861    1534 addons.go:228] Setting addon cloud-spanner=true in "addons-514000"
	I0524 11:36:50.334877    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334897    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334942    1534 host.go:66] Checking if "addons-514000" exists ...
	W0524 11:36:50.335292    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335303    1534 addons.go:274] "addons-514000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335306    1534 addons.go:464] Verifying addon metrics-server=true in "addons-514000"
	W0524 11:36:50.335329    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335333    1534 addons.go:274] "addons-514000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335353    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335359    1534 addons.go:274] "addons-514000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335362    1534 addons.go:464] Verifying addon registry=true in "addons-514000"
	W0524 11:36:50.335391    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335411    1534 addons.go:274] "addons-514000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335412    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335520    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335588    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.335599    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0524 11:36:50.335650    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.339535    1534 out.go:177] * Verifying registry addon...
	W0524 11:36:50.349556    1534 addons.go:274] "addons-514000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0524 11:36:50.349673    1534 addons.go:274] "addons-514000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0524 11:36:50.349673    1534 addons.go:464] Verifying addon ingress=true in "addons-514000"
	W0524 11:36:50.349688    1534 addons_storage_classes.go:55] "addons-514000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0524 11:36:50.349678    1534 addons.go:274] "addons-514000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0524 11:36:50.350008    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0524 11:36:50.350257    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.353441    1534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 11:36:50.357663    1534 addons.go:228] Setting addon default-storageclass=true in "addons-514000"
	I0524 11:36:50.360618    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.360641    1534 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.360646    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 11:36:50.360653    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
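
Each `sshutil` line here corresponds to a plain SSH session into the guest using the per-machine key; the same connection can be opened manually with the IP, key path, and `docker` user shown in the log line above:

    # Manual equivalent of the SSH client minikube opens (values from the log).
    ssh -i /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa \
        docker@192.168.105.2
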
	I0524 11:36:50.357776    1534 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0524 11:36:50.357776    1534 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.361446    1534 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.364279    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0524 11:36:50.365539    1534 out.go:177] * Verifying ingress addon...
	I0524 11:36:50.369698    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 11:36:50.369727    1534 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0524 11:36:50.375900    1534 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0524 11:36:50.381598    1534 out.go:177] * Verifying csi-hostpath-driver addon...
	I0524 11:36:50.387638    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0524 11:36:50.387638    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.387646    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.388147    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0524 11:36:50.390627    1534 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0524 11:36:50.391169    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0524 11:36:50.400375    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
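
The one-liner above splices a `hosts` block into the CoreDNS Corefile so that `host.minikube.internal` resolves to the gateway address 192.168.105.1 from inside the cluster. Unrolled for readability (same edit, sketched):

    # Insert this stanza ahead of the 'forward . /etc/resolv.conf' line:
    #     hosts {
    #        192.168.105.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -
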
	I0524 11:36:50.433263    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.499595    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0524 11:36:50.499607    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0524 11:36:50.511369    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.545082    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0524 11:36:50.545093    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0524 11:36:50.571075    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0524 11:36:50.571085    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0524 11:36:50.614490    1534 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0524 11:36:50.614502    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0524 11:36:50.628252    1534 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.628261    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0524 11:36:50.647925    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.858973    1534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0524 11:36:50.859000    1534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:50.862644    1534 out.go:177] * Verifying Kubernetes components...
	I0524 11:36:50.870714    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:51.015230    1534 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0524 11:36:51.239743    1534 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.239769    1534 retry.go:31] will retry after 300.967986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
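
The failure above is the usual CRD ordering race: a single `kubectl apply` over both the VolumeSnapshot CRDs and a VolumeSnapshotClass object fails because the API server has not yet registered the new kind when the class is submitted, hence "ensure CRDs are installed first". minikube simply retries (and, below, switches to `apply --force`). Applying in two phases avoids the race; a sketch using the same manifests:

    # Phase 1: install the CRDs and wait until the API server has them.
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # Phase 2: the VolumeSnapshotClass kind now resolves.
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
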
	I0524 11:36:51.240163    1534 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242031    1534 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0524 11:36:51.242040    1534 node_ready.go:38] duration metric: took 1.869375ms waiting for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242043    1534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:51.247820    1534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:51.542933    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:53.257970    1534 pod_ready.go:92] pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.257986    1534 pod_ready.go:81] duration metric: took 2.01016425s waiting for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.257991    1534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260855    1534 pod_ready.go:92] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.260862    1534 pod_ready.go:81] duration metric: took 2.866833ms waiting for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260867    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263593    1534 pod_ready.go:92] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.263598    1534 pod_ready.go:81] duration metric: took 2.728ms waiting for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263603    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266314    1534 pod_ready.go:92] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.266322    1534 pod_ready.go:81] duration metric: took 2.716417ms waiting for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266326    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268820    1534 pod_ready.go:92] pod "kube-proxy-2gj6m" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.268826    1534 pod_ready.go:81] duration metric: took 2.496209ms waiting for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268830    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659694    1534 pod_ready.go:92] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.659709    1534 pod_ready.go:81] duration metric: took 390.87725ms waiting for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659719    1534 pod_ready.go:38] duration metric: took 2.417685875s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:53.659737    1534 api_server.go:52] waiting for apiserver process to appear ...
	I0524 11:36:53.659818    1534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 11:36:54.012047    1534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469105375s)
	I0524 11:36:54.012061    1534 api_server.go:72] duration metric: took 3.153054583s to wait for apiserver process to appear ...
	I0524 11:36:54.012066    1534 api_server.go:88] waiting for apiserver healthz status ...
	I0524 11:36:54.012074    1534 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0524 11:36:54.015086    1534 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
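
The healthz probe is a plain HTTPS GET; `/healthz` is readable by unauthenticated clients by default (via the system:public-info-viewer binding), so the same check works from the host. A sketch, with `-k` skipping verification of the cluster-CA-signed certificate:

    # Expect the literal body "ok" on success, as logged above.
    curl -k https://192.168.105.2:8443/healthz
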
	I0524 11:36:54.015747    1534 api_server.go:141] control plane version: v1.27.2
	I0524 11:36:54.015755    1534 api_server.go:131] duration metric: took 3.685917ms to wait for apiserver health ...
	I0524 11:36:54.015758    1534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 11:36:54.018844    1534 system_pods.go:59] 9 kube-system pods found
	I0524 11:36:54.018857    1534 system_pods.go:61] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.018861    1534 system_pods.go:61] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.018863    1534 system_pods.go:61] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.018865    1534 system_pods.go:61] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.018868    1534 system_pods.go:61] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.018870    1534 system_pods.go:61] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.018873    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018876    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018879    1534 system_pods.go:61] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.018881    1534 system_pods.go:74] duration metric: took 3.121167ms to wait for pod list to return data ...
	I0524 11:36:54.018883    1534 default_sa.go:34] waiting for default service account to be created ...
	I0524 11:36:54.057892    1534 default_sa.go:45] found service account: "default"
	I0524 11:36:54.057899    1534 default_sa.go:55] duration metric: took 39.013541ms for default service account to be created ...
	I0524 11:36:54.057902    1534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 11:36:54.259995    1534 system_pods.go:86] 9 kube-system pods found
	I0524 11:36:54.260005    1534 system_pods.go:89] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.260008    1534 system_pods.go:89] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.260011    1534 system_pods.go:89] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.260014    1534 system_pods.go:89] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.260016    1534 system_pods.go:89] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.260019    1534 system_pods.go:89] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.260023    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260027    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260030    1534 system_pods.go:89] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.260033    1534 system_pods.go:126] duration metric: took 202.129584ms to wait for k8s-apps to be running ...
	I0524 11:36:54.260037    1534 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 11:36:54.260088    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:54.265390    1534 system_svc.go:56] duration metric: took 5.350666ms WaitForService to wait for kubelet.
	I0524 11:36:54.265399    1534 kubeadm.go:581] duration metric: took 3.406395625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 11:36:54.265408    1534 node_conditions.go:102] verifying NodePressure condition ...
	I0524 11:36:54.458086    1534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 11:36:54.458097    1534 node_conditions.go:123] node cpu capacity is 2
	I0524 11:36:54.458103    1534 node_conditions.go:105] duration metric: took 192.694167ms to run NodePressure ...
	I0524 11:36:54.458107    1534 start.go:228] waiting for startup goroutines ...
	I0524 11:36:56.972492    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0524 11:36:56.972559    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.029376    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0524 11:36:57.038824    1534 addons.go:228] Setting addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.038864    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:57.040182    1534 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0524 11:36:57.040196    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.078053    1534 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0524 11:36:57.082115    1534 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0524 11:36:57.085015    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0524 11:36:57.085022    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0524 11:36:57.091862    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0524 11:36:57.091873    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0524 11:36:57.099462    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.099472    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0524 11:36:57.106631    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.550488    1534 addons.go:464] Verifying addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.555392    1534 out.go:177] * Verifying gcp-auth addon...
	I0524 11:36:57.561721    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0524 11:36:57.566760    1534 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0524 11:36:57.566769    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.070711    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.570942    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.076515    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.570540    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.070962    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.571104    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.071573    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.571018    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.072518    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.570869    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.071445    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.570661    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.070807    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.570832    1534 kapi.go:107] duration metric: took 7.009157292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0524 11:37:04.574809    1534 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0524 11:37:04.579620    1534 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0524 11:37:04.583658    1534 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
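
The `gcp-auth-skip-secret` label has to be present at admission time, since the mutating webhook only sees pods as they are created. One way to attach it, as a sketch (`my-pod` and `nginx` are placeholders):

    # Create a pod that the gcp-auth webhook will leave unmodified.
    kubectl run my-pod --image=nginx --labels=gcp-auth-skip-secret=true
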
	I0524 11:42:50.357445    1534 kapi.go:107] duration metric: took 6m0.009773291s to wait for kubernetes.io/minikube-addons=registry ...
	W0524 11:42:50.357907    1534 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0524 11:42:50.387243    1534 kapi.go:107] duration metric: took 6m0.001495875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0524 11:42:50.387315    1534 kapi.go:107] duration metric: took 6m0.013814333s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0524 11:42:50.395532    1534 out.go:177] * Enabled addons: metrics-server, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0524 11:42:50.403494    1534 addons.go:499] enable addons completed in 6m0.072361709s: enabled=[metrics-server ingress-dns inspektor-gadget cloud-spanner storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0524 11:42:50.403556    1534 start.go:233] waiting for cluster config update ...
	I0524 11:42:50.403587    1534 start.go:242] writing updated cluster config ...
	I0524 11:42:50.408325    1534 ssh_runner.go:195] Run: rm -f paused
	I0524 11:42:50.568016    1534 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 11:42:50.572568    1534 out.go:177] 
	W0524 11:42:50.576443    1534 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 11:42:50.580476    1534 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 11:42:50.587567    1534 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
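
The skew warning a few lines up reflects the kubectl support policy: the client should be within one minor version of the server, and 1.25 against 1.27 is two. The two versions can be compared directly as a quick check:

    # Show client and server gitVersion side by side (the +/-1 minor rule).
    kubectl version --output=json | grep gitVersion
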
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:04:44 UTC. --
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516120296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516129229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.568789730Z" level=info msg="ignoring event" container=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569050551Z" level=info msg="shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569107330Z" level=warning msg="cleaning up after shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569117420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607638824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607702137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607716942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607727942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f47df037d99569ad6cd8f4ef2c3926ab0aed2bb5b85f513c520fc0abc42c67f3/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.953788187Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557702813Z" level=info msg="shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557754555Z" level=warning msg="cleaning up after shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557759977Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:02 addons-514000 dockerd[916]: time="2023-05-24T18:37:02.558086156Z" level=info msg="ignoring event" container=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[916]: time="2023-05-24T18:37:03.602683250Z" level=info msg="ignoring event" container=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603089611Z" level=info msg="shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603147527Z" level=warning msg="cleaning up after shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603154445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:03 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856707697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856808407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856985177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856997233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	d1ad6d2cd7d4d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              27 minutes ago      Running             gcp-auth                     0                   f47df037d9956
	2623eeac77855       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   60ea5019d1f26
	61fdb94dca547       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   1f82f1afb5ca6
	5e708965dbb0a       97e04611ad434                                                                                                             27 minutes ago      Running             coredns                      0                   eaf04536825bb
	c6d1bdca910b8       ba04bb24b9575                                                                                                             27 minutes ago      Running             storage-provisioner          0                   55be207be2898
	bf84d832ec967       29921a0845422                                                                                                             27 minutes ago      Running             kube-proxy                   0                   59d50204b0754
	046435c695b1e       305d7ed1dae28                                                                                                             28 minutes ago      Running             kube-scheduler               0                   cd9a002bb369c
	aa80b21f85087       2ee705380c3c5                                                                                                             28 minutes ago      Running             kube-controller-manager      0                   0ebf3f27cb768
	d5556d8565d49       24bc64e911039                                                                                                             28 minutes ago      Running             etcd                         0                   37fcc92ec98a7
	a485542b186e4       72c9df6be7f1b                                                                                                             28 minutes ago      Running             kube-apiserver               0                   383872bb10f81
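
The table above is collected from the container runtime; with the docker runtime used in this run, the same view can be reproduced inside the guest through minikube's SSH wrapper, as a sketch:

    # List all CRI containers (running and exited) inside the node VM.
    minikube ssh -- sudo crictl ps -a
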
	
	* 
	* ==> coredns [5e708965dbb0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59819 - 54023 "HINFO IN 5089267470380203033.66065138292483152. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.424436073s
	[INFO] 10.244.0.7:57634 - 60032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112931s
	[INFO] 10.244.0.7:36916 - 20311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078547s
	[INFO] 10.244.0.7:53888 - 30613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056548s
	[INFO] 10.244.0.7:40805 - 41575 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000031112s
	[INFO] 10.244.0.7:39418 - 54110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031567s
	[INFO] 10.244.0.7:45485 - 20279 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113676s
	[INFO] 10.244.0.7:49511 - 45953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000780781s
	[INFO] 10.244.0.7:49660 - 37020 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00090552s
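
The NXDOMAIN run above is the pod resolver walking its search path: with `options ndots:5` (visible in the resolv.conf rewrite in the Docker journal above), `storage.googleapis.com` has fewer than five dots, so every search domain is tried and rejected before the bare name finally succeeds. The resolver settings driving this can be inspected from a throwaway pod; a sketch assuming the busybox image:

    # Print the pod resolv.conf (search domains and ndots:5) behind the walk.
    kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
      cat /etc/resolv.conf
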
	
	* 
	* ==> describe nodes <==
	* Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc66183cd0c646be999944d821185b81
	  System UUID:                cc66183cd0c646be999944d821185b81
	  Boot ID:                    2cd753bf-40ed-44ce-928e-d8bb002a6012
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-5429c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5d78c9869d-dmkfx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     27m
	  kube-system                 etcd-addons-514000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         28m
	  kube-system                 kube-apiserver-addons-514000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-addons-514000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-2gj6m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-addons-514000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 snapshot-controller-75bbb956b9-j5jhp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-75bbb956b9-txrxl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           27m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
	
	* 
	* ==> dmesg <==
	* [May24 18:36] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.727578] EINJ: EINJ table not found.
	[  +0.656332] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.905553] systemd-fstab-generator[471]: Ignoring "noauto" for root device
	[  +0.096232] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +2.874276] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +1.463827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.166355] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.076432] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.091985] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +1.135416] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091978] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.084182] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.089221] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.079548] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.085105] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +2.454751] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
	[  +5.146027] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[ +14.118818] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.617169] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.922890] kauditd_printk_skb: 33 callbacks suppressed
	[May24 18:37] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [d5556d8565d4] <==
	* {"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-514000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:46:33.876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2023-05-24T18:46:33.881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.400485ms","hash":3638416343}
	{"level":"info","ts":"2023-05-24T18:46:33.882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638416343,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2023-05-24T18:51:33.887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1145,"took":"2.024563ms","hash":894933936}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":894933936,"revision":1145,"compact-revision":846}
	{"level":"info","ts":"2023-05-24T18:56:33.899Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1444,"took":"1.805765ms","hash":2332186912}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2332186912,"revision":1444,"compact-revision":1145}
	{"level":"info","ts":"2023-05-24T19:01:33.910Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1744}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1744,"took":"1.711146ms","hash":1851239145}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1851239145,"revision":1744,"compact-revision":1444}
	
	* 
	* ==> gcp-auth [d1ad6d2cd7d4] <==
	* 2023/05/24 18:37:03 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  19:04:44 up 28 min,  0 users,  load average: 0.26, 0.48, 0.48
	Linux addons-514000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a485542b186e] <==
	* I0524 18:36:57.610279       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0524 18:41:34.543984       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:41:34.544093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.544191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.544366       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.554262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.554305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.559325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.559355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.542946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.543557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.550014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.550133       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.556769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.556848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.536214       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.536373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.548003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.548106       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537554       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537601       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.538264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.538305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [aa80b21f8508] <==
	* I0524 18:37:01.505510       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:01.519013       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.495937       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.582819       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.506916       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.513803       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:03.593747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.596306       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598360       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598521       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:03.685792       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.524940       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.535090       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.540969       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.541239       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:04.555353       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:20.560907       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0524 18:37:20.561333       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0524 18:37:20.662721       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 18:37:20.895710       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0524 18:37:20.999329       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 18:37:33.024397       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:33.041354       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:34.012720       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:34.026381       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [bf84d832ec96] <==
	* I0524 18:36:51.096070       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0524 18:36:51.096254       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0524 18:36:51.096305       1 server_others.go:551] "Using iptables proxy"
	I0524 18:36:51.129985       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 18:36:51.130045       1 server_others.go:190] "Using iptables Proxier"
	I0524 18:36:51.130091       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 18:36:51.130875       1 server.go:657] "Version info" version="v1.27.2"
	I0524 18:36:51.130883       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 18:36:51.134580       1 config.go:188] "Starting service config controller"
	I0524 18:36:51.134608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 18:36:51.134627       1 config.go:97] "Starting endpoint slice config controller"
	I0524 18:36:51.134630       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 18:36:51.134949       1 config.go:315] "Starting node config controller"
	I0524 18:36:51.134952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 18:36:51.240491       1 shared_informer.go:318] Caches are synced for node config
	I0524 18:36:51.240513       1 shared_informer.go:318] Caches are synced for service config
	I0524 18:36:51.240529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [046435c695b1] <==
	* W0524 18:36:34.551296       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 18:36:34.551335       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 18:36:34.555158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 18:36:34.555224       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 18:36:34.555257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:34.555277       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:34.555318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:34.555338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:34.555364       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:34.555398       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:34.555416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:34.555434       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.414754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:35.414831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:35.419590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 18:36:35.419621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 18:36:35.431658       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:35.431697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:35.542100       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:35.542130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:35.557940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:35.558018       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.599004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 18:36:35.599089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0524 18:36:36.142741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:04:44 UTC. --
	May 24 18:59:37 addons-514000 kubelet[2266]: E0524 18:59:37.219146    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:59:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:59:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:59:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:00:37 addons-514000 kubelet[2266]: E0524 19:00:37.312409    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:00:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:00:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:00:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:01:37 addons-514000 kubelet[2266]: E0524 19:01:37.211178    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:01:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:01:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:01:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:01:37 addons-514000 kubelet[2266]: W0524 19:01:37.216820    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	May 24 19:02:37 addons-514000 kubelet[2266]: E0524 19:02:37.212957    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:02:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:02:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:02:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:03:37 addons-514000 kubelet[2266]: E0524 19:03:37.209339    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:03:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:03:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:03:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:04:37 addons-514000 kubelet[2266]: E0524 19:04:37.208780    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:04:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:04:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:04:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	
	* 
	* ==> storage-provisioner [c6d1bdca910b] <==
	* I0524 18:36:52.162540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 18:36:52.179095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 18:36:52.179236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 18:36:52.184538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 18:36:52.185437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	I0524 18:36:52.187871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d6698b6-9eb5-4aee-aab5-f9c270917482", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a became leader
	I0524 18:36:52.285999       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.79s)
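
A note on the kubelet journal in the log dump above: the recurring "Could not set up iptables canary" errors fire once a minute because the Buildroot guest kernel exposes no ip6tables `nat' table, so the KUBE-KUBELET-CANARY chain can never be created. A minimal manual check, sketched on the assumption that the profile is still running and that the guest kernel uses the standard netfilter module name:

    $ minikube ssh -p addons-514000
    $ lsmod | grep ip6table_nat     # no output: the module is not loaded
    $ sudo modprobe ip6table_nat    # fails if the kernel was built without the module
    $ sudo ip6tables -t nat -L      # succeeds only once the nat table exists

If the modprobe fails, the canary errors are a property of the minikube ISO build rather than of this particular run, and are likely unrelated to the Ingress failure itself.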

TestAddons/parallel/InspektorGadget (480.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:329: TestAddons/parallel/InspektorGadget: WARNING: pod list for "gadget" "k8s-app=gadget" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:814: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:814: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
addons_test.go:814: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-05-24 12:04:42.975461 -0700 PDT m=+1739.826957292
addons_test.go:815: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
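
The failure above means that no pod carrying the k8s-app=gadget label ever appeared in the gadget namespace during the 8m0s wait. The same condition can be checked by hand before reading the automated post-mortem below; this is a sketch against this run's kubectl context, and the object kinds queried are only the ones the addon would normally create:

    $ kubectl --context addons-514000 -n gadget get pods -l k8s-app=gadget
    $ kubectl --context addons-514000 -n gadget get daemonsets,deployments
    $ kubectl --context addons-514000 -n gadget get events --sort-by=.lastTimestamp

An empty pod list combined with an empty event stream typically means the addon's workload was never created or scheduled, rather than that a container is crash-looping.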
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | --download-only -p             | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT |                     |
	|         | binary-mirror-689000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49309         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-689000        | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | -p addons-514000               | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:42 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:54 PDT |                     |
	|         | addons-514000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:36:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:36:07.002339    1534 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:36:07.002453    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002456    1534 out.go:309] Setting ErrFile to fd 2...
	I0524 11:36:07.002459    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002536    1534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 11:36:07.003586    1534 out.go:303] Setting JSON to false
	I0524 11:36:07.018861    1534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":338,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:36:07.018925    1534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:36:07.027769    1534 out.go:177] * [addons-514000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:36:07.031820    1534 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 11:36:07.031893    1534 notify.go:220] Checking for updates...
	I0524 11:36:07.038648    1534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:07.041871    1534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:36:07.045796    1534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:36:07.047102    1534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 11:36:07.049751    1534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 11:36:07.052962    1534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:36:07.056656    1534 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 11:36:07.063768    1534 start.go:295] selected driver: qemu2
	I0524 11:36:07.063774    1534 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:36:07.063780    1534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 11:36:07.066216    1534 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:36:07.068844    1534 out.go:177] * Automatically selected the socket_vmnet network
	I0524 11:36:07.072801    1534 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 11:36:07.072817    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:07.072825    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:07.072829    1534 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 11:36:07.072834    1534 start_flags.go:319] config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:07.072903    1534 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:36:07.080756    1534 out.go:177] * Starting control plane node addons-514000 in cluster addons-514000
	I0524 11:36:07.084763    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:07.084787    1534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:36:07.084798    1534 cache.go:57] Caching tarball of preloaded images
	I0524 11:36:07.084855    1534 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 11:36:07.084860    1534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 11:36:07.085026    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:07.085039    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json: {Name:mk030e94b16168c63405a9b01e247098a953bb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:07.085215    1534 cache.go:195] Successfully downloaded all kic artifacts
	I0524 11:36:07.085252    1534 start.go:364] acquiring machines lock for addons-514000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 11:36:07.085315    1534 start.go:368] acquired machines lock for "addons-514000" in 57.708µs
	I0524 11:36:07.085327    1534 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:07.085355    1534 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 11:36:07.093778    1534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 11:36:07.463575    1534 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0524 11:36:07.463635    1534 client.go:168] LocalClient.Create starting
	I0524 11:36:07.463808    1534 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 11:36:07.521208    1534 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 11:36:07.678481    1534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 11:36:08.060894    1534 main.go:141] libmachine: Creating SSH key...
	I0524 11:36:08.147520    1534 main.go:141] libmachine: Creating Disk image...
	I0524 11:36:08.147526    1534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 11:36:08.147754    1534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.231403    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.231426    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.231485    1534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0524 11:36:08.238737    1534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 11:36:08.238750    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.238766    1534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.238773    1534 main.go:141] libmachine: Starting QEMU VM...
	I0524 11:36:08.238817    1534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:73:48:f5:f9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.309201    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.309237    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.309242    1534 main.go:141] libmachine: Attempt 0
	I0524 11:36:08.309258    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:10.311441    1534 main.go:141] libmachine: Attempt 1
	I0524 11:36:10.311529    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:12.313222    1534 main.go:141] libmachine: Attempt 2
	I0524 11:36:12.313245    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:14.315294    1534 main.go:141] libmachine: Attempt 3
	I0524 11:36:14.315307    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:16.317343    1534 main.go:141] libmachine: Attempt 4
	I0524 11:36:16.317356    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:18.319398    1534 main.go:141] libmachine: Attempt 5
	I0524 11:36:18.319426    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321607    1534 main.go:141] libmachine: Attempt 6
	I0524 11:36:20.321690    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321979    1534 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0524 11:36:20.322073    1534 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 11:36:20.322118    1534 main.go:141] libmachine: Found match: a:73:48:f5:f9:b3
	I0524 11:36:20.322159    1534 main.go:141] libmachine: IP: 192.168.105.2
	I0524 11:36:20.322182    1534 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0524 11:36:22.345943    1534 machine.go:88] provisioning docker machine ...
	I0524 11:36:22.346010    1534 buildroot.go:166] provisioning hostname "addons-514000"
	I0524 11:36:22.346753    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.347771    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.347789    1534 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0524 11:36:22.440700    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0524 11:36:22.440862    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.441350    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.441366    1534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 11:36:22.513129    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 11:36:22.513148    1534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 11:36:22.513166    1534 buildroot.go:174] setting up certificates
	I0524 11:36:22.513196    1534 provision.go:83] configureAuth start
	I0524 11:36:22.513202    1534 provision.go:138] copyHostCerts
	I0524 11:36:22.513384    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 11:36:22.513907    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 11:36:22.514185    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 11:36:22.514351    1534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-514000]
	I0524 11:36:22.615592    1534 provision.go:172] copyRemoteCerts
	I0524 11:36:22.615660    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 11:36:22.615678    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:22.647614    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 11:36:22.654906    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 11:36:22.661956    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 11:36:22.668901    1534 provision.go:86] duration metric: configureAuth took 155.700959ms
	I0524 11:36:22.668909    1534 buildroot.go:189] setting minikube options for container-runtime
	I0524 11:36:22.669263    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:22.669315    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.669538    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.669543    1534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 11:36:22.728343    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 11:36:22.728351    1534 buildroot.go:70] root file system type: tmpfs
	I0524 11:36:22.728414    1534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 11:36:22.728455    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.728711    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.728749    1534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 11:36:22.797892    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 11:36:22.797940    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.798220    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.798231    1534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 11:36:23.149053    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 11:36:23.149067    1534 machine.go:91] provisioned docker machine in 803.097167ms
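The unit install at 11:36:22.798 uses an idempotent compare-and-swap: the new file is only moved into place (and the daemon reloaded) when it differs from the current one, and `diff` failing on a missing file covers first boot, as seen here. The pattern in isolation (paths as in the log):

    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    # diff exits non-zero when the files differ or $CUR does not exist yet
    sudo diff -u "$CUR" "$NEW" || {
        sudo mv "$NEW" "$CUR"
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
    }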
	I0524 11:36:23.149073    1534 client.go:171] LocalClient.Create took 15.685539208s
	I0524 11:36:23.149079    1534 start.go:167] duration metric: libmachine.API.Create for "addons-514000" took 15.685619292s
	I0524 11:36:23.149084    1534 start.go:300] post-start starting for "addons-514000" (driver="qemu2")
	I0524 11:36:23.149087    1534 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 11:36:23.149151    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 11:36:23.149161    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.182740    1534 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 11:36:23.184182    1534 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 11:36:23.184191    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 11:36:23.184263    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 11:36:23.184291    1534 start.go:303] post-start completed in 35.204125ms
	I0524 11:36:23.184667    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:23.184838    1534 start.go:128] duration metric: createHost completed in 16.099587584s
	I0524 11:36:23.184860    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:23.185079    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:23.185084    1534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 11:36:23.240206    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684953383.421013085
	
	I0524 11:36:23.240212    1534 fix.go:207] guest clock: 1684953383.421013085
	I0524 11:36:23.240216    1534 fix.go:220] Guest: 2023-05-24 11:36:23.421013085 -0700 PDT Remote: 2023-05-24 11:36:23.184841 -0700 PDT m=+16.200821626 (delta=236.172085ms)
	I0524 11:36:23.240228    1534 fix.go:191] guest clock delta is within tolerance: 236.172085ms
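fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host clock and skips resyncing when the delta is within tolerance. A sketch of that comparison, assuming GNU date and bc on both ends and an illustrative 2-second threshold (the log does not show minikube's actual tolerance):

    GUEST=$(ssh -i ~/.minikube/machines/addons-514000/id_rsa docker@192.168.105.2 'date +%s.%N')
    HOST=$(date +%s.%N)
    DELTA=$(echo "$HOST - $GUEST" | bc | tr -d '-')   # absolute drift in seconds
    if [ "$(echo "$DELTA > 2" | bc)" -eq 1 ]; then
        echo "guest clock drift ${DELTA}s exceeds tolerance; resync needed"
    fi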
	I0524 11:36:23.240231    1534 start.go:83] releasing machines lock for "addons-514000", held for 16.155020041s
	I0524 11:36:23.240534    1534 ssh_runner.go:195] Run: cat /version.json
	I0524 11:36:23.240542    1534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 11:36:23.240552    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.240589    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.271294    1534 ssh_runner.go:195] Run: systemctl --version
	I0524 11:36:23.356274    1534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 11:36:23.358206    1534 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 11:36:23.358253    1534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 11:36:23.363251    1534 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
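Conflicting bridge/podman CNI configs are disabled by renaming rather than deleting, so the originals can be restored later. The find/rename idiom from the command above, with quoting spelled out:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;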
	I0524 11:36:23.363272    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:23.363358    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:23.374219    1534 docker.go:633] Got preloaded images: 
	I0524 11:36:23.374227    1534 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 11:36:23.374272    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:23.377135    1534 ssh_runner.go:195] Run: which lz4
	I0524 11:36:23.378475    1534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0524 11:36:23.379822    1534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 11:36:23.379833    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 11:36:24.715030    1534 docker.go:597] Took 1.336609 seconds to copy over tarball
	I0524 11:36:24.715105    1534 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 11:36:25.802869    1534 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087750334s)
	I0524 11:36:25.802885    1534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0524 11:36:25.818539    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:25.821398    1534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 11:36:25.826757    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:25.912573    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:27.259007    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346426625s)
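This is the preload fast path: when the expected images are missing from the docker store, a prebuilt lz4 tarball of the image data is copied in, unpacked over /var, and docker is restarted to pick up the restored image database. Reduced to the manual steps run on the guest (the tarball is the one scp'd at 11:36:23.379):

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack preloaded images over /var
    sudo rm /preloaded.tar.lz4                       # free the space again
    sudo systemctl daemon-reload
    sudo systemctl restart docker                    # reload the restored image db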
	I0524 11:36:27.259050    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.259161    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.264502    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 11:36:27.267902    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 11:36:27.271357    1534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.271387    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 11:36:27.274823    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.278019    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 11:36:27.280856    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.283904    1534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 11:36:27.287473    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 11:36:27.291108    1534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 11:36:27.294288    1534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 11:36:27.297250    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.376117    1534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 11:36:27.384917    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.384994    1534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 11:36:27.390435    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.395426    1534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 11:36:27.402483    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.406870    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.411215    1534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 11:36:27.451530    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.456795    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.461922    1534 ssh_runner.go:195] Run: which cri-dockerd
	I0524 11:36:27.463049    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 11:36:27.465876    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 11:36:27.470660    1534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 11:36:27.538638    1534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 11:36:27.616092    1534 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.616109    1534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 11:36:27.621459    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.708405    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:28.851963    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.143548708s)
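The cgroup-driver switch at 11:36:27.616 is shipped as a 144-byte /etc/docker/daemon.json followed by a daemon restart. The exact payload is not printed in this log, so the file below is an assumed minimal version carrying only the setting the log names:

    printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' \
        | sudo tee /etc/docker/daemon.json >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart docker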
	I0524 11:36:28.852015    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:28.939002    1534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 11:36:29.020013    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:29.108812    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.187424    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 11:36:29.194801    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.274472    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 11:36:29.298400    1534 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 11:36:29.298499    1534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 11:36:29.300633    1534 start.go:549] Will wait 60s for crictl version
	I0524 11:36:29.300681    1534 ssh_runner.go:195] Run: which crictl
	I0524 11:36:29.302069    1534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 11:36:29.320125    1534 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 11:36:29.320196    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.329425    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.346012    1534 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 11:36:29.346159    1534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 11:36:29.347609    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
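The /etc/hosts edit uses a replace-or-append idiom: strip any existing entry for the name, append the fresh one, and copy the rebuilt file back in a single step so /etc/hosts is never left half-written. Generalized:

    NAME=host.minikube.internal
    ADDR=192.168.105.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$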
	I0524 11:36:29.351578    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:29.351619    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.359168    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.359177    1534 docker.go:563] Images already preloaded, skipping extraction
	I0524 11:36:29.359234    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.366578    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.366587    1534 cache_images.go:84] Images are preloaded, skipping loading
	I0524 11:36:29.366634    1534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
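`docker info` is queried here so the effective cgroup driver can be matched in the kubelet configuration written below (`cgroupDriver: cgroupfs`). To check by hand on the guest:

    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs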
	I0524 11:36:29.376722    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:29.376734    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:29.376743    1534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 11:36:29.376755    1534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 11:36:29.376831    1534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
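Before the rendered file above is fed to init (at 11:36:29.945), it can be exercised without touching the node; kubeadm supports a dry-run mode for exactly this:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run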
	
	I0524 11:36:29.376873    1534 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 11:36:29.376934    1534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 11:36:29.379950    1534 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 11:36:29.379980    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 11:36:29.383262    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 11:36:29.388298    1534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 11:36:29.393370    1534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0524 11:36:29.398040    1534 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0524 11:36:29.399441    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.403560    1534 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0524 11:36:29.403576    1534 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.403733    1534 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 11:36:29.494908    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt ...
	I0524 11:36:29.494916    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt: {Name:mkde13471093958a457d9307a0c213d7ba461177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495144    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key ...
	I0524 11:36:29.495147    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key: {Name:mk5b2a6f100829fa25412e4c96a6b4d9b186c9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495264    1534 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 11:36:29.601357    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt ...
	I0524 11:36:29.601364    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt: {Name:mkc3f94501092c9c51cfa6d329a0a2c4cec184ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601593    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key ...
	I0524 11:36:29.601596    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key: {Name:mk7acf18000a82a656fee32bbd454a3c129dabde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601733    1534 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key
	I0524 11:36:29.601741    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0524 11:36:29.653842    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt ...
	I0524 11:36:29.653845    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: {Name:mk3856cd37d1f07be2cc9902b19f9498b880112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654036    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key ...
	I0524 11:36:29.654040    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key: {Name:mkbc8808085e1496dcb2b3e03156e443b7b7994b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654176    1534 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969
	I0524 11:36:29.654188    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 11:36:29.724674    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 ...
	I0524 11:36:29.724678    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969: {Name:mk424188d0f28cb0aa520452bb8ec4583a153ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724815    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 ...
	I0524 11:36:29.724818    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969: {Name:mk98c3231c62717b32e2418cabd759d6ad5645ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724926    1534 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt
	I0524 11:36:29.725147    1534 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key
	I0524 11:36:29.725241    1534 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key
	I0524 11:36:29.725256    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0524 11:36:29.842949    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0524 11:36:29.842953    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mk581c30062675e68aafc25cb79bfc8a62fd3e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843105    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key ...
	I0524 11:36:29.843110    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key: {Name:mk019f6bac347a368012a36cea939860ce210025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843389    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 11:36:29.843593    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 11:36:29.843619    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 11:36:29.843756    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
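The SANs requested for the apiserver cert at 11:36:29.654 ([192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]) can be confirmed on the finished certificate; with OpenSSL 1.1.1 or newer:

    openssl x509 -noout -subject -ext subjectAltName \
        -in /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt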
	I0524 11:36:29.844302    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 11:36:29.851879    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 11:36:29.859249    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 11:36:29.866847    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 11:36:29.873646    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 11:36:29.880415    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 11:36:29.887466    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 11:36:29.894575    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 11:36:29.901581    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 11:36:29.908027    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 11:36:29.914140    1534 ssh_runner.go:195] Run: openssl version
	I0524 11:36:29.916182    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 11:36:29.919659    1534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921372    1534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921394    1534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.923349    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
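The b5213941.0 link name is OpenSSL's subject-hash form of the CA, which is how the system trust store locates minikubeCA.pem; it can be reproduced by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0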
	I0524 11:36:29.926902    1534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 11:36:29.928503    1534 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 11:36:29.928540    1534 kubeadm.go:404] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:29.928599    1534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 11:36:29.935998    1534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 11:36:29.939589    1534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 11:36:29.942818    1534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 11:36:29.945835    1534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 11:36:29.945853    1534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 11:36:29.967889    1534 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 11:36:29.967941    1534 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 11:36:30.020294    1534 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 11:36:30.020350    1534 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 11:36:30.020400    1534 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0524 11:36:30.076237    1534 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 11:36:30.084415    1534 out.go:204]   - Generating certificates and keys ...
	I0524 11:36:30.084460    1534 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 11:36:30.084494    1534 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 11:36:30.272940    1534 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 11:36:30.453046    1534 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 11:36:30.580586    1534 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 11:36:30.639773    1534 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 11:36:30.738497    1534 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 11:36:30.738567    1534 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.858811    1534 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 11:36:30.858875    1534 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.935967    1534 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 11:36:30.967281    1534 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 11:36:31.073416    1534 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 11:36:31.073445    1534 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 11:36:31.335469    1534 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 11:36:31.530915    1534 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 11:36:31.573436    1534 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 11:36:31.637219    1534 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 11:36:31.645102    1534 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 11:36:31.645531    1534 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 11:36:31.645571    1534 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 11:36:31.737201    1534 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 11:36:31.741345    1534 out.go:204]   - Booting up control plane ...
	I0524 11:36:31.741390    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 11:36:31.741439    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 11:36:31.741469    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 11:36:31.741512    1534 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 11:36:31.741595    1534 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 11:36:35.739695    1534 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002246 seconds
	I0524 11:36:35.739796    1534 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 11:36:35.750536    1534 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 11:36:36.270805    1534 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 11:36:36.271028    1534 kubeadm.go:322] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 11:36:36.776691    1534 kubeadm.go:322] [bootstrap-token] Using token: zlw52u.ca0agirmjwjpmd4f
	I0524 11:36:36.783931    1534 out.go:204]   - Configuring RBAC rules ...
	I0524 11:36:36.784005    1534 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 11:36:36.785227    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 11:36:36.791945    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 11:36:36.793322    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 11:36:36.794557    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 11:36:36.795891    1534 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 11:36:36.802617    1534 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 11:36:36.956552    1534 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 11:36:37.187637    1534 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 11:36:37.187937    1534 kubeadm.go:322] 
	I0524 11:36:37.187967    1534 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 11:36:37.187973    1534 kubeadm.go:322] 
	I0524 11:36:37.188044    1534 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 11:36:37.188053    1534 kubeadm.go:322] 
	I0524 11:36:37.188069    1534 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 11:36:37.188099    1534 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 11:36:37.188128    1534 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 11:36:37.188133    1534 kubeadm.go:322] 
	I0524 11:36:37.188155    1534 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 11:36:37.188158    1534 kubeadm.go:322] 
	I0524 11:36:37.188189    1534 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 11:36:37.188193    1534 kubeadm.go:322] 
	I0524 11:36:37.188219    1534 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 11:36:37.188277    1534 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 11:36:37.188314    1534 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 11:36:37.188322    1534 kubeadm.go:322] 
	I0524 11:36:37.188361    1534 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 11:36:37.188399    1534 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 11:36:37.188411    1534 kubeadm.go:322] 
	I0524 11:36:37.188464    1534 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188516    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 11:36:37.188534    1534 kubeadm.go:322] 	--control-plane 
	I0524 11:36:37.188538    1534 kubeadm.go:322] 
	I0524 11:36:37.188580    1534 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 11:36:37.188584    1534 kubeadm.go:322] 
	I0524 11:36:37.188629    1534 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188681    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 11:36:37.188736    1534 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 11:36:37.188819    1534 kubeadm.go:322] W0524 18:36:30.200947    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188904    1534 kubeadm.go:322] W0524 18:36:31.916526    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
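The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the CA's DER-encoded public key; the upstream kubeadm docs give this recipe for recomputing it on the control plane (assumes an RSA CA key):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'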
	I0524 11:36:37.188909    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:37.188916    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:37.195686    1534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 11:36:37.199715    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 11:36:37.203087    1534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 11:36:37.208259    1534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 11:36:37.208303    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.208333    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=addons-514000 minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.258566    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.271047    1534 ops.go:34] apiserver oom_adj: -16
	I0524 11:36:37.796169    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.296162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.796257    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.295049    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.796244    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.796162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.296458    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.796323    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.296423    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.796432    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.296246    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.796149    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.296189    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.796183    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.296206    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.796370    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.296192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.296219    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.796135    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.296201    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.796192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.296070    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.332878    1534 kubeadm.go:1076] duration metric: took 13.124695208s to wait for elevateKubeSystemPrivileges.
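The burst of identical `kubectl get sa default` runs from 11:36:37 to 11:36:50 above is a readiness poll: once the default ServiceAccount exists, the controller-manager is serving and the cluster-admin binding created at 11:36:37.208 can take effect. The loop reduces to:

    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done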
	I0524 11:36:50.332892    1534 kubeadm.go:406] StartCluster complete in 20.404490625s
	I0524 11:36:50.332916    1534 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333079    1534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:50.333301    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333499    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 11:36:50.333541    1534 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0524 11:36:50.333603    1534 addons.go:66] Setting ingress=true in profile "addons-514000"
	I0524 11:36:50.333609    1534 addons.go:66] Setting registry=true in profile "addons-514000"
	I0524 11:36:50.333611    1534 addons.go:228] Setting addon ingress=true in "addons-514000"
	I0524 11:36:50.333614    1534 addons.go:228] Setting addon registry=true in "addons-514000"
	I0524 11:36:50.333650    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333646    1534 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0524 11:36:50.333656    1534 addons.go:66] Setting storage-provisioner=true in profile "addons-514000"
	I0524 11:36:50.333660    1534 addons.go:228] Setting addon storage-provisioner=true in "addons-514000"
	I0524 11:36:50.333671    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333804    1534 addons.go:66] Setting metrics-server=true in profile "addons-514000"
	I0524 11:36:50.333879    1534 addons.go:228] Setting addon metrics-server=true in "addons-514000"
	I0524 11:36:50.333906    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333926    1534 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.333947    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:50.333682    1534 addons.go:66] Setting ingress-dns=true in profile "addons-514000"
	I0524 11:36:50.333976    1534 addons.go:228] Setting addon ingress-dns=true in "addons-514000"
	I0524 11:36:50.333995    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334035    1534 addons.go:66] Setting gcp-auth=true in profile "addons-514000"
	I0524 11:36:50.333605    1534 addons.go:66] Setting volumesnapshots=true in profile "addons-514000"
	I0524 11:36:50.334092    1534 addons.go:228] Setting addon volumesnapshots=true in "addons-514000"
	I0524 11:36:50.334116    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334159    1534 addons.go:66] Setting default-storageclass=true in profile "addons-514000"
	I0524 11:36:50.334172    1534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0524 11:36:50.333653    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334095    1534 mustload.go:65] Loading cluster: addons-514000
	I0524 11:36:50.334706    1534 addons.go:66] Setting inspektor-gadget=true in profile "addons-514000"
	I0524 11:36:50.334713    1534 addons.go:228] Setting addon inspektor-gadget=true in "addons-514000"
	I0524 11:36:50.333694    1534 addons.go:66] Setting cloud-spanner=true in profile "addons-514000"
	I0524 11:36:50.334861    1534 addons.go:228] Setting addon cloud-spanner=true in "addons-514000"
	I0524 11:36:50.334877    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334897    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334942    1534 host.go:66] Checking if "addons-514000" exists ...
	W0524 11:36:50.335292    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335303    1534 addons.go:274] "addons-514000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335306    1534 addons.go:464] Verifying addon metrics-server=true in "addons-514000"
	W0524 11:36:50.335329    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335333    1534 addons.go:274] "addons-514000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335353    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335359    1534 addons.go:274] "addons-514000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335362    1534 addons.go:464] Verifying addon registry=true in "addons-514000"
	W0524 11:36:50.335391    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.339535    1534 out.go:177] * Verifying registry addon...
	W0524 11:36:50.335411    1534 addons.go:274] "addons-514000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335412    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335520    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335588    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.335599    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0524 11:36:50.335650    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.349556    1534 addons.go:274] "addons-514000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0524 11:36:50.349673    1534 addons.go:274] "addons-514000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0524 11:36:50.349673    1534 addons.go:464] Verifying addon ingress=true in "addons-514000"
	W0524 11:36:50.349688    1534 addons_storage_classes.go:55] "addons-514000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0524 11:36:50.349678    1534 addons.go:274] "addons-514000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0524 11:36:50.350008    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0524 11:36:50.350257    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.353441    1534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 11:36:50.357663    1534 addons.go:228] Setting addon default-storageclass=true in "addons-514000"
	I0524 11:36:50.360618    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.360641    1534 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.360646    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 11:36:50.360653    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.365539    1534 out.go:177] * Verifying ingress addon...
	I0524 11:36:50.357776    1534 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0524 11:36:50.357776    1534 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.361446    1534 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.364279    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0524 11:36:50.381598    1534 out.go:177] * Verifying csi-hostpath-driver addon...
	I0524 11:36:50.369698    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 11:36:50.369727    1534 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0524 11:36:50.375900    1534 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0524 11:36:50.387638    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0524 11:36:50.387638    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.387646    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.388147    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0524 11:36:50.390627    1534 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0524 11:36:50.391169    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0524 11:36:50.400375    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
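[Editor's note] The one-liner above rewrites the CoreDNS ConfigMap in place. Reconstructed from the sed expressions themselves, the fragment it injects into the Corefile (a hosts block inserted before the forward stanza, plus a log directive before errors) is:

        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        log

This is what later surfaces at 11:36:51.015230 as the "host record injected into CoreDNS's ConfigMap" message.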
	I0524 11:36:50.433263    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.499595    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0524 11:36:50.499607    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0524 11:36:50.511369    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.545082    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0524 11:36:50.545093    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0524 11:36:50.571075    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0524 11:36:50.571085    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0524 11:36:50.614490    1534 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0524 11:36:50.614502    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0524 11:36:50.628252    1534 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.628261    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0524 11:36:50.647925    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.858973    1534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0524 11:36:50.859000    1534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:50.862644    1534 out.go:177] * Verifying Kubernetes components...
	I0524 11:36:50.870714    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:51.015230    1534 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0524 11:36:51.239743    1534 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.239769    1534 retry.go:31] will retry after 300.967986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
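[Editor's note] The failure above is the usual one-shot apply race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml cannot be mapped because the volumesnapshotclasses CRD created by the very same apply is not yet established. minikube handles this by retrying (the --force re-apply below completes about 2.5s later, once the CRDs are established). A minimal manual equivalent, sketched against the same addon manifests, would be to install the CRDs first and wait for them:

    # hypothetical manual ordering; minikube's retry loop achieves the same result
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml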
	I0524 11:36:51.240163    1534 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242031    1534 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0524 11:36:51.242040    1534 node_ready.go:38] duration metric: took 1.869375ms waiting for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242043    1534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:51.247820    1534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:51.542933    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:53.257970    1534 pod_ready.go:92] pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.257986    1534 pod_ready.go:81] duration metric: took 2.01016425s waiting for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.257991    1534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260855    1534 pod_ready.go:92] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.260862    1534 pod_ready.go:81] duration metric: took 2.866833ms waiting for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260867    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263593    1534 pod_ready.go:92] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.263598    1534 pod_ready.go:81] duration metric: took 2.728ms waiting for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263603    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266314    1534 pod_ready.go:92] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.266322    1534 pod_ready.go:81] duration metric: took 2.716417ms waiting for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266326    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268820    1534 pod_ready.go:92] pod "kube-proxy-2gj6m" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.268826    1534 pod_ready.go:81] duration metric: took 2.496209ms waiting for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268830    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659694    1534 pod_ready.go:92] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.659709    1534 pod_ready.go:81] duration metric: took 390.87725ms waiting for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659719    1534 pod_ready.go:38] duration metric: took 2.417685875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:53.659737    1534 api_server.go:52] waiting for apiserver process to appear ...
	I0524 11:36:53.659818    1534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 11:36:54.012047    1534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469105375s)
	I0524 11:36:54.012061    1534 api_server.go:72] duration metric: took 3.153054583s to wait for apiserver process to appear ...
	I0524 11:36:54.012066    1534 api_server.go:88] waiting for apiserver healthz status ...
	I0524 11:36:54.012074    1534 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0524 11:36:54.015086    1534 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0524 11:36:54.015747    1534 api_server.go:141] control plane version: v1.27.2
	I0524 11:36:54.015755    1534 api_server.go:131] duration metric: took 3.685917ms to wait for apiserver health ...
	I0524 11:36:54.015758    1534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 11:36:54.018844    1534 system_pods.go:59] 9 kube-system pods found
	I0524 11:36:54.018857    1534 system_pods.go:61] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.018861    1534 system_pods.go:61] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.018863    1534 system_pods.go:61] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.018865    1534 system_pods.go:61] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.018868    1534 system_pods.go:61] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.018870    1534 system_pods.go:61] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.018873    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018876    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018879    1534 system_pods.go:61] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.018881    1534 system_pods.go:74] duration metric: took 3.121167ms to wait for pod list to return data ...
	I0524 11:36:54.018883    1534 default_sa.go:34] waiting for default service account to be created ...
	I0524 11:36:54.057892    1534 default_sa.go:45] found service account: "default"
	I0524 11:36:54.057899    1534 default_sa.go:55] duration metric: took 39.013541ms for default service account to be created ...
	I0524 11:36:54.057902    1534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 11:36:54.259995    1534 system_pods.go:86] 9 kube-system pods found
	I0524 11:36:54.260005    1534 system_pods.go:89] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.260008    1534 system_pods.go:89] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.260011    1534 system_pods.go:89] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.260014    1534 system_pods.go:89] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.260016    1534 system_pods.go:89] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.260019    1534 system_pods.go:89] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.260023    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260027    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260030    1534 system_pods.go:89] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.260033    1534 system_pods.go:126] duration metric: took 202.129584ms to wait for k8s-apps to be running ...
	I0524 11:36:54.260037    1534 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 11:36:54.260088    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:54.265390    1534 system_svc.go:56] duration metric: took 5.350666ms WaitForService to wait for kubelet.
	I0524 11:36:54.265399    1534 kubeadm.go:581] duration metric: took 3.406395625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 11:36:54.265408    1534 node_conditions.go:102] verifying NodePressure condition ...
	I0524 11:36:54.458086    1534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 11:36:54.458097    1534 node_conditions.go:123] node cpu capacity is 2
	I0524 11:36:54.458103    1534 node_conditions.go:105] duration metric: took 192.694167ms to run NodePressure ...
	I0524 11:36:54.458107    1534 start.go:228] waiting for startup goroutines ...
	I0524 11:36:56.972492    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0524 11:36:56.972559    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.029376    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0524 11:36:57.038824    1534 addons.go:228] Setting addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.038864    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:57.040182    1534 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0524 11:36:57.040196    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.078053    1534 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0524 11:36:57.082115    1534 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0524 11:36:57.085015    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0524 11:36:57.085022    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0524 11:36:57.091862    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0524 11:36:57.091873    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0524 11:36:57.099462    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.099472    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0524 11:36:57.106631    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.550488    1534 addons.go:464] Verifying addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.555392    1534 out.go:177] * Verifying gcp-auth addon...
	I0524 11:36:57.561721    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0524 11:36:57.566760    1534 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0524 11:36:57.566769    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.070711    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.570942    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.076515    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.570540    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.070962    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.571104    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.071573    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.571018    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.072518    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.570869    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.071445    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.570661    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.070807    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.570832    1534 kapi.go:107] duration metric: took 7.009157292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0524 11:37:04.574809    1534 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0524 11:37:04.579620    1534 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0524 11:37:04.583658    1534 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
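[Editor's note] A sketch of the opt-out described in the message above, assuming the webhook only checks for the presence of the gcp-auth-skip-secret label key (the value "true" here is illustrative):

    # label a pod so the gcp-auth webhook leaves it alone
    kubectl run no-gcp-creds --image=busybox \
        --labels=gcp-auth-skip-secret=true -- sleep 3600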
	I0524 11:42:50.357445    1534 kapi.go:107] duration metric: took 6m0.009773291s to wait for kubernetes.io/minikube-addons=registry ...
	W0524 11:42:50.357907    1534 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0524 11:42:50.387243    1534 kapi.go:107] duration metric: took 6m0.001495875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0524 11:42:50.387315    1534 kapi.go:107] duration metric: took 6m0.013814333s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
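[Editor's note] All three timeouts share a symptom: no pod ever matched the label selectors registered at 11:36:50, each of which reported "Found 0 Pods". That is consistent with the earlier skip-enablement warnings at 11:36:50.335, where host status returned "connection refused" and the registry, ingress, and csi-hostpath-driver addons were written to config without their manifests being applied, while the verification waits still ran for the full 6m0s. Commands to confirm the missing pods directly, reconstructed from the namespaces and selectors in those waits, would be:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx

The container listing later in this report shows only control-plane, snapshot-controller, and gcp-auth workloads, consistent with these addon pods never being created.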
	I0524 11:42:50.395532    1534 out.go:177] * Enabled addons: metrics-server, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0524 11:42:50.403494    1534 addons.go:499] enable addons completed in 6m0.072361709s: enabled=[metrics-server ingress-dns inspektor-gadget cloud-spanner storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0524 11:42:50.403556    1534 start.go:233] waiting for cluster config update ...
	I0524 11:42:50.403587    1534 start.go:242] writing updated cluster config ...
	I0524 11:42:50.408325    1534 ssh_runner.go:195] Run: rm -f paused
	I0524 11:42:50.568016    1534 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 11:42:50.572568    1534 out.go:177] 
	W0524 11:42:50.576443    1534 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 11:42:50.580476    1534 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 11:42:50.587567    1534 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
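[Editor's note] The warning at 11:42:50.576443 reflects the kubectl version-skew policy: the client is supported within one minor version of the server, and 1.25.9 against 1.27.2 is two minors apart. The bundled client suggested above sidesteps the mismatch, e.g.:

    # run the version-matched kubectl that minikube ships
    minikube kubectl -- version
    minikube kubectl -- get pods -A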
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:04:43 UTC. --
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516120296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516129229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.568789730Z" level=info msg="ignoring event" container=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569050551Z" level=info msg="shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569107330Z" level=warning msg="cleaning up after shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569117420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607638824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607702137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607716942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607727942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f47df037d99569ad6cd8f4ef2c3926ab0aed2bb5b85f513c520fc0abc42c67f3/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.953788187Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557702813Z" level=info msg="shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557754555Z" level=warning msg="cleaning up after shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557759977Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:02 addons-514000 dockerd[916]: time="2023-05-24T18:37:02.558086156Z" level=info msg="ignoring event" container=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[916]: time="2023-05-24T18:37:03.602683250Z" level=info msg="ignoring event" container=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603089611Z" level=info msg="shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603147527Z" level=warning msg="cleaning up after shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603154445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:03 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856707697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856808407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856985177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856997233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	d1ad6d2cd7d4d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              27 minutes ago      Running             gcp-auth                     0                   f47df037d9956
	2623eeac77855       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   60ea5019d1f26
	61fdb94dca547       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   1f82f1afb5ca6
	5e708965dbb0a       97e04611ad434                                                                                                             27 minutes ago      Running             coredns                      0                   eaf04536825bb
	c6d1bdca910b8       ba04bb24b9575                                                                                                             27 minutes ago      Running             storage-provisioner          0                   55be207be2898
	bf84d832ec967       29921a0845422                                                                                                             27 minutes ago      Running             kube-proxy                   0                   59d50204b0754
	046435c695b1e       305d7ed1dae28                                                                                                             28 minutes ago      Running             kube-scheduler               0                   cd9a002bb369c
	aa80b21f85087       2ee705380c3c5                                                                                                             28 minutes ago      Running             kube-controller-manager      0                   0ebf3f27cb768
	d5556d8565d49       24bc64e911039                                                                                                             28 minutes ago      Running             etcd                         0                   37fcc92ec98a7
	a485542b186e4       72c9df6be7f1b                                                                                                             28 minutes ago      Running             kube-apiserver               0                   383872bb10f81
	
	* 
	* ==> coredns [5e708965dbb0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59819 - 54023 "HINFO IN 5089267470380203033.66065138292483152. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.424436073s
	[INFO] 10.244.0.7:57634 - 60032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112931s
	[INFO] 10.244.0.7:36916 - 20311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078547s
	[INFO] 10.244.0.7:53888 - 30613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056548s
	[INFO] 10.244.0.7:40805 - 41575 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000031112s
	[INFO] 10.244.0.7:39418 - 54110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031567s
	[INFO] 10.244.0.7:45485 - 20279 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113676s
	[INFO] 10.244.0.7:49511 - 45953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000780781s
	[INFO] 10.244.0.7:49660 - 37020 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00090552s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:02:41 +0000   Wed, 24 May 2023 18:36:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc66183cd0c646be999944d821185b81
	  System UUID:                cc66183cd0c646be999944d821185b81
	  Boot ID:                    2cd753bf-40ed-44ce-928e-d8bb002a6012
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-5429c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5d78c9869d-dmkfx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     27m
	  kube-system                 etcd-addons-514000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         28m
	  kube-system                 kube-apiserver-addons-514000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-addons-514000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-2gj6m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-addons-514000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 snapshot-controller-75bbb956b9-j5jhp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-75bbb956b9-txrxl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           27m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
	
	* 
	* ==> dmesg <==
	* [May24 18:36] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.727578] EINJ: EINJ table not found.
	[  +0.656332] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.905553] systemd-fstab-generator[471]: Ignoring "noauto" for root device
	[  +0.096232] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +2.874276] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +1.463827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.166355] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.076432] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.091985] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +1.135416] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091978] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.084182] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.089221] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.079548] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.085105] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +2.454751] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
	[  +5.146027] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[ +14.118818] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.617169] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.922890] kauditd_printk_skb: 33 callbacks suppressed
	[May24 18:37] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [d5556d8565d4] <==
	* {"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-514000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:46:33.876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2023-05-24T18:46:33.881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.400485ms","hash":3638416343}
	{"level":"info","ts":"2023-05-24T18:46:33.882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638416343,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2023-05-24T18:51:33.887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1145,"took":"2.024563ms","hash":894933936}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":894933936,"revision":1145,"compact-revision":846}
	{"level":"info","ts":"2023-05-24T18:56:33.899Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1444,"took":"1.805765ms","hash":2332186912}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2332186912,"revision":1444,"compact-revision":1145}
	{"level":"info","ts":"2023-05-24T19:01:33.910Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1744}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1744,"took":"1.711146ms","hash":1851239145}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1851239145,"revision":1744,"compact-revision":1444}
	
	* 
	* ==> gcp-auth [d1ad6d2cd7d4] <==
	* 2023/05/24 18:37:03 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  19:04:43 up 28 min,  0 users,  load average: 0.19, 0.47, 0.48
	Linux addons-514000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a485542b186e] <==
	* I0524 18:36:57.610279       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0524 18:41:34.543984       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:41:34.544093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.544191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.544366       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.554262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.554305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.559325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.559355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.542946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.543557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.550014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.550133       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.556769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.556848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.536214       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.536373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.548003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.548106       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537554       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537601       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.538264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.538305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [aa80b21f8508] <==
	* I0524 18:37:01.505510       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:01.519013       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.495937       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.582819       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.506916       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.513803       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:03.593747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.596306       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598360       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598521       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:03.685792       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.524940       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.535090       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.540969       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.541239       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:04.555353       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:20.560907       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0524 18:37:20.561333       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0524 18:37:20.662721       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 18:37:20.895710       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0524 18:37:20.999329       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 18:37:33.024397       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:33.041354       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:34.012720       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:34.026381       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [bf84d832ec96] <==
	* I0524 18:36:51.096070       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0524 18:36:51.096254       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0524 18:36:51.096305       1 server_others.go:551] "Using iptables proxy"
	I0524 18:36:51.129985       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 18:36:51.130045       1 server_others.go:190] "Using iptables Proxier"
	I0524 18:36:51.130091       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 18:36:51.130875       1 server.go:657] "Version info" version="v1.27.2"
	I0524 18:36:51.130883       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 18:36:51.134580       1 config.go:188] "Starting service config controller"
	I0524 18:36:51.134608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 18:36:51.134627       1 config.go:97] "Starting endpoint slice config controller"
	I0524 18:36:51.134630       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 18:36:51.134949       1 config.go:315] "Starting node config controller"
	I0524 18:36:51.134952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 18:36:51.240491       1 shared_informer.go:318] Caches are synced for node config
	I0524 18:36:51.240513       1 shared_informer.go:318] Caches are synced for service config
	I0524 18:36:51.240529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [046435c695b1] <==
	* W0524 18:36:34.551296       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 18:36:34.551335       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 18:36:34.555158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 18:36:34.555224       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 18:36:34.555257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:34.555277       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:34.555318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:34.555338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:34.555364       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:34.555398       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:34.555416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:34.555434       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.414754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:35.414831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:35.419590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 18:36:35.419621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 18:36:35.431658       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:35.431697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:35.542100       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:35.542130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:35.557940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:35.558018       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.599004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 18:36:35.599089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0524 18:36:36.142741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:04:43 UTC. --
	May 24 18:59:37 addons-514000 kubelet[2266]: E0524 18:59:37.219146    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:59:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:59:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:59:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:00:37 addons-514000 kubelet[2266]: E0524 19:00:37.312409    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:00:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:00:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:00:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:01:37 addons-514000 kubelet[2266]: E0524 19:01:37.211178    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:01:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:01:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:01:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:01:37 addons-514000 kubelet[2266]: W0524 19:01:37.216820    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	May 24 19:02:37 addons-514000 kubelet[2266]: E0524 19:02:37.212957    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:02:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:02:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:02:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:03:37 addons-514000 kubelet[2266]: E0524 19:03:37.209339    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:03:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:03:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:03:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:04:37 addons-514000 kubelet[2266]: E0524 19:04:37.208780    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:04:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:04:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:04:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	
	* 
	* ==> storage-provisioner [c6d1bdca910b] <==
	* I0524 18:36:52.162540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 18:36:52.179095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 18:36:52.179236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 18:36:52.184538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 18:36:52.185437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	I0524 18:36:52.187871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d6698b6-9eb5-4aee-aab5-f9c270917482", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a became leader
	I0524 18:36:52.285999       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (480.89s)
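Note on the failure above: the kubelet excerpt in the post-mortem is dominated by the recurring "Could not set up iptables canary" error, which indicates the guest kernel was built without the ip6tables nat table (ip6table_nat). On minikube's Buildroot image this message is usually harmless noise, so treat it as a lead rather than a confirmed root cause. A minimal triage sketch, assuming the addons-514000 profile is still running; the gadget namespace is the addon's usual install target and is an assumption here:

    # Check whether the guest kernel exposes the ip6tables nat module (assumes the VM is still up).
    out/minikube-darwin-arm64 -p addons-514000 ssh -- lsmod | grep -i ip6table
    # Try to load it; on kernels built without the module this fails, matching the log above.
    out/minikube-darwin-arm64 -p addons-514000 ssh -- sudo modprobe ip6table_nat
    # Inspect the Inspektor Gadget pods the test polls for (namespace assumed).
    kubectl --context addons-514000 get pods -n gadget -o wide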

TestAddons/parallel/MetricsServer (721.01s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:381: failed waiting for metrics-server deployment to stabilize: timed out waiting for the condition
addons_test.go:383: metrics-server stabilized in 6m0.00230875s
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
addons_test.go:385: ***** TestAddons/parallel/MetricsServer: pod "k8s-app=metrics-server" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:385: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
addons_test.go:385: TestAddons/parallel/MetricsServer: showing logs for failed pods as of 2023-05-24 12:06:51.555224 -0700 PDT m=+1868.407768001
addons_test.go:386: failed waiting for k8s-app=metrics-server pod: k8s-app=metrics-server within 6m0s: context deadline exceeded
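The lines above show the test polling for a Ready pod with the k8s-app=metrics-server label for 6m0s and hitting the context deadline. A minimal sketch of reproducing that wait by hand, assuming the addons-514000 profile is still running (the label and namespace are taken from the log above):

    # List the deployment and pods the test polls for.
    kubectl --context addons-514000 -n kube-system get deploy,pods -l k8s-app=metrics-server
    # Surface scheduling and image-pull events for any pod stuck short of Ready.
    kubectl --context addons-514000 -n kube-system describe pods -l k8s-app=metrics-server
    # Once metrics-server is serving, this answers; until then it errors out.
    kubectl --context addons-514000 top nodes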
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | --download-only -p             | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT |                     |
	|         | binary-mirror-689000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49309         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-689000        | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | -p addons-514000               | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:42 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:54 PDT |                     |
	|         | addons-514000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-514000        | jenkins | v1.30.1 | 24 May 23 12:04 PDT | 24 May 23 12:04 PDT |
	|         | -p addons-514000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:36:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:36:07.002339    1534 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:36:07.002453    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002456    1534 out.go:309] Setting ErrFile to fd 2...
	I0524 11:36:07.002459    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002536    1534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 11:36:07.003586    1534 out.go:303] Setting JSON to false
	I0524 11:36:07.018861    1534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":338,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:36:07.018925    1534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:36:07.027769    1534 out.go:177] * [addons-514000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:36:07.031820    1534 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 11:36:07.031893    1534 notify.go:220] Checking for updates...
	I0524 11:36:07.038648    1534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:07.041871    1534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:36:07.045796    1534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:36:07.047102    1534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 11:36:07.049751    1534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 11:36:07.052962    1534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:36:07.056656    1534 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 11:36:07.063768    1534 start.go:295] selected driver: qemu2
	I0524 11:36:07.063774    1534 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:36:07.063780    1534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 11:36:07.066216    1534 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:36:07.068844    1534 out.go:177] * Automatically selected the socket_vmnet network
	I0524 11:36:07.072801    1534 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 11:36:07.072817    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:07.072825    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:07.072829    1534 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 11:36:07.072834    1534 start_flags.go:319] config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:07.072903    1534 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:36:07.080756    1534 out.go:177] * Starting control plane node addons-514000 in cluster addons-514000
	I0524 11:36:07.084763    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:07.084787    1534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:36:07.084798    1534 cache.go:57] Caching tarball of preloaded images
	I0524 11:36:07.084855    1534 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 11:36:07.084860    1534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 11:36:07.085026    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:07.085039    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json: {Name:mk030e94b16168c63405a9b01e247098a953bb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:07.085215    1534 cache.go:195] Successfully downloaded all kic artifacts
	I0524 11:36:07.085252    1534 start.go:364] acquiring machines lock for addons-514000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 11:36:07.085315    1534 start.go:368] acquired machines lock for "addons-514000" in 57.708µs
	I0524 11:36:07.085327    1534 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:07.085355    1534 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 11:36:07.093778    1534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 11:36:07.463575    1534 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0524 11:36:07.463635    1534 client.go:168] LocalClient.Create starting
	I0524 11:36:07.463808    1534 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 11:36:07.521208    1534 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 11:36:07.678481    1534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 11:36:08.060894    1534 main.go:141] libmachine: Creating SSH key...
	I0524 11:36:08.147520    1534 main.go:141] libmachine: Creating Disk image...
	I0524 11:36:08.147526    1534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 11:36:08.147754    1534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.231403    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.231426    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.231485    1534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0524 11:36:08.238737    1534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 11:36:08.238750    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.238766    1534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.238773    1534 main.go:141] libmachine: Starting QEMU VM...
	I0524 11:36:08.238817    1534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:73:48:f5:f9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.309201    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.309237    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.309242    1534 main.go:141] libmachine: Attempt 0
	I0524 11:36:08.309258    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:10.311441    1534 main.go:141] libmachine: Attempt 1
	I0524 11:36:10.311529    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:12.313222    1534 main.go:141] libmachine: Attempt 2
	I0524 11:36:12.313245    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:14.315294    1534 main.go:141] libmachine: Attempt 3
	I0524 11:36:14.315307    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:16.317343    1534 main.go:141] libmachine: Attempt 4
	I0524 11:36:16.317356    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:18.319398    1534 main.go:141] libmachine: Attempt 5
	I0524 11:36:18.319426    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321607    1534 main.go:141] libmachine: Attempt 6
	I0524 11:36:20.321690    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321979    1534 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0524 11:36:20.322073    1534 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 11:36:20.322118    1534 main.go:141] libmachine: Found match: a:73:48:f5:f9:b3
	I0524 11:36:20.322159    1534 main.go:141] libmachine: IP: 192.168.105.2
	I0524 11:36:20.322182    1534 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0524 11:36:22.345943    1534 machine.go:88] provisioning docker machine ...
	I0524 11:36:22.346010    1534 buildroot.go:166] provisioning hostname "addons-514000"
	I0524 11:36:22.346753    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.347771    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.347789    1534 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0524 11:36:22.440700    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0524 11:36:22.440862    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.441350    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.441366    1534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 11:36:22.513129    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 11:36:22.513148    1534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 11:36:22.513166    1534 buildroot.go:174] setting up certificates
	I0524 11:36:22.513196    1534 provision.go:83] configureAuth start
	I0524 11:36:22.513202    1534 provision.go:138] copyHostCerts
	I0524 11:36:22.513384    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 11:36:22.513907    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 11:36:22.514185    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 11:36:22.514351    1534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-514000]
	I0524 11:36:22.615592    1534 provision.go:172] copyRemoteCerts
	I0524 11:36:22.615660    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 11:36:22.615678    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:22.647614    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 11:36:22.654906    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 11:36:22.661956    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 11:36:22.668901    1534 provision.go:86] duration metric: configureAuth took 155.700959ms
	I0524 11:36:22.668909    1534 buildroot.go:189] setting minikube options for container-runtime
	I0524 11:36:22.669263    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:22.669315    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.669538    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.669543    1534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 11:36:22.728343    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 11:36:22.728351    1534 buildroot.go:70] root file system type: tmpfs
	I0524 11:36:22.728414    1534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 11:36:22.728455    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.728711    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.728749    1534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 11:36:22.797892    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 11:36:22.797940    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.798220    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.798231    1534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 11:36:23.149053    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 11:36:23.149067    1534 machine.go:91] provisioned docker machine in 803.097167ms
	I0524 11:36:23.149073    1534 client.go:171] LocalClient.Create took 15.685539208s
	I0524 11:36:23.149079    1534 start.go:167] duration metric: libmachine.API.Create for "addons-514000" took 15.685619292s
	I0524 11:36:23.149084    1534 start.go:300] post-start starting for "addons-514000" (driver="qemu2")
	I0524 11:36:23.149087    1534 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 11:36:23.149151    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 11:36:23.149161    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.182740    1534 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 11:36:23.184182    1534 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 11:36:23.184191    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 11:36:23.184263    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 11:36:23.184291    1534 start.go:303] post-start completed in 35.204125ms
	I0524 11:36:23.184667    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:23.184838    1534 start.go:128] duration metric: createHost completed in 16.099587584s
	I0524 11:36:23.184860    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:23.185079    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:23.185084    1534 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 11:36:23.240206    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684953383.421013085
	
	I0524 11:36:23.240212    1534 fix.go:207] guest clock: 1684953383.421013085
	I0524 11:36:23.240216    1534 fix.go:220] Guest: 2023-05-24 11:36:23.421013085 -0700 PDT Remote: 2023-05-24 11:36:23.184841 -0700 PDT m=+16.200821626 (delta=236.172085ms)
	I0524 11:36:23.240228    1534 fix.go:191] guest clock delta is within tolerance: 236.172085ms
	I0524 11:36:23.240231    1534 start.go:83] releasing machines lock for "addons-514000", held for 16.155020041s
	I0524 11:36:23.240534    1534 ssh_runner.go:195] Run: cat /version.json
	I0524 11:36:23.240542    1534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 11:36:23.240552    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.240589    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.271294    1534 ssh_runner.go:195] Run: systemctl --version
	I0524 11:36:23.356274    1534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 11:36:23.358206    1534 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 11:36:23.358253    1534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 11:36:23.363251    1534 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 11:36:23.363272    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:23.363358    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:23.374219    1534 docker.go:633] Got preloaded images: 
	I0524 11:36:23.374227    1534 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 11:36:23.374272    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:23.377135    1534 ssh_runner.go:195] Run: which lz4
	I0524 11:36:23.378475    1534 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 11:36:23.379822    1534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 11:36:23.379833    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 11:36:24.715030    1534 docker.go:597] Took 1.336609 seconds to copy over tarball
	I0524 11:36:24.715105    1534 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 11:36:25.802869    1534 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087750334s)
	I0524 11:36:25.802885    1534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0524 11:36:25.818539    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:25.821398    1534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 11:36:25.826757    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:25.912573    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:27.259007    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346426625s)
	I0524 11:36:27.259050    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.259161    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.264502    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 11:36:27.267902    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 11:36:27.271357    1534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.271387    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 11:36:27.274823    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.278019    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 11:36:27.280856    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.283904    1534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 11:36:27.287473    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
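The run of sed commands above flips containerd to the cgroupfs driver by rewriting config.toml in place. The same rewrite expressed with a Go regexp (the path is the one from the log; the helper name is made up for illustration):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs rewrites any SystemdCgroup assignment to false, mirroring the
// logged `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println("error:", err)
	}
}
```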
	I0524 11:36:27.291108    1534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 11:36:27.294288    1534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 11:36:27.297250    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.376117    1534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 11:36:27.384917    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.384994    1534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 11:36:27.390435    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.395426    1534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 11:36:27.402483    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.406870    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.411215    1534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 11:36:27.451530    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.456795    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.461922    1534 ssh_runner.go:195] Run: which cri-dockerd
	I0524 11:36:27.463049    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 11:36:27.465876    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 11:36:27.470660    1534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 11:36:27.538638    1534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 11:36:27.616092    1534 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.616109    1534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 11:36:27.621459    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.708405    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:28.851963    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.143548708s)
	I0524 11:36:28.852015    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:28.939002    1534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 11:36:29.020013    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:29.108812    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.187424    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 11:36:29.194801    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.274472    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 11:36:29.298400    1534 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 11:36:29.298499    1534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
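start.go gives the cri-dockerd socket up to 60s to appear; here the single stat succeeds immediately. A stdlib sketch of that wait (the helper name and the 500ms poll interval are assumptions):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
```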
	I0524 11:36:29.300633    1534 start.go:549] Will wait 60s for crictl version
	I0524 11:36:29.300681    1534 ssh_runner.go:195] Run: which crictl
	I0524 11:36:29.302069    1534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 11:36:29.320125    1534 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 11:36:29.320196    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.329425    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.346012    1534 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 11:36:29.346159    1534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 11:36:29.347609    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
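The one-liner above makes the host.minikube.internal mapping idempotent: strip any existing entry, append the fresh one, then copy the temp file over /etc/hosts. The same logic in Go (the helper name is illustrative; it is pointed at a scratch file here since the real target needs root):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t"+name, then appends
// "ip\tname" — the same idempotent trick as the grep -v / echo / cp pipeline.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.scratch", "192.168.105.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
```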
	I0524 11:36:29.351578    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:29.351619    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.359168    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.359177    1534 docker.go:563] Images already preloaded, skipping extraction
	I0524 11:36:29.359234    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.366578    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.366587    1534 cache_images.go:84] Images are preloaded, skipping loading
	I0524 11:36:29.366634    1534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 11:36:29.376722    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:29.376734    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:29.376743    1534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 11:36:29.376755    1534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 11:36:29.376831    1534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 11:36:29.376873    1534 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
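kubeadm.go renders the kubelet drop-in above from the node's config. A toy text/template rendering of the ExecStart line, using the values from this log (the struct and template text are illustrative, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	Version, Node, IP, CRISocket string
}

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Node}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, kubeletOpts{
		Version:   "v1.27.2",
		Node:      "addons-514000",
		IP:        "192.168.105.2",
		CRISocket: "unix:///var/run/cri-dockerd.sock",
	})
}
```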
	I0524 11:36:29.376934    1534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 11:36:29.379950    1534 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 11:36:29.379980    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 11:36:29.383262    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 11:36:29.388298    1534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 11:36:29.393370    1534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0524 11:36:29.398040    1534 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0524 11:36:29.399441    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.403560    1534 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0524 11:36:29.403576    1534 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.403733    1534 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 11:36:29.494908    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt ...
	I0524 11:36:29.494916    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt: {Name:mkde13471093958a457d9307a0c213d7ba461177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495144    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key ...
	I0524 11:36:29.495147    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key: {Name:mk5b2a6f100829fa25412e4c96a6b4d9b186c9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495264    1534 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 11:36:29.601357    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt ...
	I0524 11:36:29.601364    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt: {Name:mkc3f94501092c9c51cfa6d329a0a2c4cec184ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601593    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key ...
	I0524 11:36:29.601596    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key: {Name:mk7acf18000a82a656fee32bbd454a3c129dabde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601733    1534 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key
	I0524 11:36:29.601741    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0524 11:36:29.653842    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt ...
	I0524 11:36:29.653845    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: {Name:mk3856cd37d1f07be2cc9902b19f9498b880112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654036    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key ...
	I0524 11:36:29.654040    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key: {Name:mkbc8808085e1496dcb2b3e03156e443b7b7994b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654176    1534 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969
	I0524 11:36:29.654188    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 11:36:29.724674    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 ...
	I0524 11:36:29.724678    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969: {Name:mk424188d0f28cb0aa520452bb8ec4583a153ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724815    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 ...
	I0524 11:36:29.724818    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969: {Name:mk98c3231c62717b32e2418cabd759d6ad5645ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724926    1534 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt
	I0524 11:36:29.725147    1534 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key
	I0524 11:36:29.725241    1534 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key
	I0524 11:36:29.725256    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0524 11:36:29.842949    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0524 11:36:29.842953    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mk581c30062675e68aafc25cb79bfc8a62fd3e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843105    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key ...
	I0524 11:36:29.843110    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key: {Name:mk019f6bac347a368012a36cea939860ce210025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
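certs.go above mints the minikubeCA and proxyClientCA key pairs and then signs the per-profile client, apiserver, and aggregator certs with them. A self-contained sketch of generating a self-signed CA like that with the standard library (the subject name matches the log; the 10-year lifetime and key size are assumptions):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```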
	I0524 11:36:29.843389    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 11:36:29.843593    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 11:36:29.843619    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 11:36:29.843756    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 11:36:29.844302    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 11:36:29.851879    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 11:36:29.859249    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 11:36:29.866847    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 11:36:29.873646    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 11:36:29.880415    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 11:36:29.887466    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 11:36:29.894575    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 11:36:29.901581    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 11:36:29.908027    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 11:36:29.914140    1534 ssh_runner.go:195] Run: openssl version
	I0524 11:36:29.916182    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 11:36:29.919659    1534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921372    1534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921394    1534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.923349    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 11:36:29.926902    1534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 11:36:29.928503    1534 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 11:36:29.928540    1534 kubeadm.go:404] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:29.928599    1534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 11:36:29.935998    1534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 11:36:29.939589    1534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 11:36:29.942818    1534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 11:36:29.945835    1534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 11:36:29.945853    1534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 11:36:29.967889    1534 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 11:36:29.967941    1534 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 11:36:30.020294    1534 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 11:36:30.020350    1534 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 11:36:30.020400    1534 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 11:36:30.076237    1534 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 11:36:30.084415    1534 out.go:204]   - Generating certificates and keys ...
	I0524 11:36:30.084460    1534 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 11:36:30.084494    1534 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 11:36:30.272940    1534 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 11:36:30.453046    1534 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 11:36:30.580586    1534 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 11:36:30.639773    1534 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 11:36:30.738497    1534 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 11:36:30.738567    1534 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.858811    1534 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 11:36:30.858875    1534 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.935967    1534 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 11:36:30.967281    1534 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 11:36:31.073416    1534 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 11:36:31.073445    1534 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 11:36:31.335469    1534 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 11:36:31.530915    1534 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 11:36:31.573436    1534 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 11:36:31.637219    1534 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 11:36:31.645102    1534 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 11:36:31.645531    1534 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 11:36:31.645571    1534 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 11:36:31.737201    1534 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 11:36:31.741345    1534 out.go:204]   - Booting up control plane ...
	I0524 11:36:31.741390    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 11:36:31.741439    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 11:36:31.741469    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 11:36:31.741512    1534 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 11:36:31.741595    1534 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 11:36:35.739695    1534 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002246 seconds
	I0524 11:36:35.739796    1534 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 11:36:35.750536    1534 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 11:36:36.270805    1534 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 11:36:36.271028    1534 kubeadm.go:322] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 11:36:36.776691    1534 kubeadm.go:322] [bootstrap-token] Using token: zlw52u.ca0agirmjwjpmd4f
	I0524 11:36:36.783931    1534 out.go:204]   - Configuring RBAC rules ...
	I0524 11:36:36.784005    1534 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 11:36:36.785227    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 11:36:36.791945    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 11:36:36.793322    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 11:36:36.794557    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 11:36:36.795891    1534 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 11:36:36.802617    1534 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 11:36:36.956552    1534 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 11:36:37.187637    1534 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 11:36:37.187937    1534 kubeadm.go:322] 
	I0524 11:36:37.187967    1534 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 11:36:37.187973    1534 kubeadm.go:322] 
	I0524 11:36:37.188044    1534 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 11:36:37.188053    1534 kubeadm.go:322] 
	I0524 11:36:37.188069    1534 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 11:36:37.188099    1534 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 11:36:37.188128    1534 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 11:36:37.188133    1534 kubeadm.go:322] 
	I0524 11:36:37.188155    1534 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 11:36:37.188158    1534 kubeadm.go:322] 
	I0524 11:36:37.188189    1534 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 11:36:37.188193    1534 kubeadm.go:322] 
	I0524 11:36:37.188219    1534 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 11:36:37.188277    1534 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 11:36:37.188314    1534 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 11:36:37.188322    1534 kubeadm.go:322] 
	I0524 11:36:37.188361    1534 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 11:36:37.188399    1534 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 11:36:37.188411    1534 kubeadm.go:322] 
	I0524 11:36:37.188464    1534 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188516    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 11:36:37.188534    1534 kubeadm.go:322] 	--control-plane 
	I0524 11:36:37.188538    1534 kubeadm.go:322] 
	I0524 11:36:37.188580    1534 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 11:36:37.188584    1534 kubeadm.go:322] 
	I0524 11:36:37.188629    1534 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188681    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 11:36:37.188736    1534 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 11:36:37.188819    1534 kubeadm.go:322] W0524 18:36:30.200947    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188904    1534 kubeadm.go:322] W0524 18:36:31.916526    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
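The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest over the CA certificate's DER-encoded public key (its SubjectPublicKeyInfo). A short sketch that recomputes it from ca.crt (the path is the one used elsewhere in this log):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the raw SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```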
	I0524 11:36:37.188909    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:37.188916    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:37.195686    1534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 11:36:37.199715    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 11:36:37.203087    1534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 11:36:37.208259    1534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 11:36:37.208303    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.208333    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=addons-514000 minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.258566    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.271047    1534 ops.go:34] apiserver oom_adj: -16
	I0524 11:36:37.796169    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.296162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.796257    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.295049    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.796244    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.796162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.296458    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.796323    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.296423    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.796432    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.296246    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.796149    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.296189    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.796183    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.296206    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.796370    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.296192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.296219    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.796135    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.296201    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.796192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.296070    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.332878    1534 kubeadm.go:1076] duration metric: took 13.124695208s to wait for elevateKubeSystemPrivileges.
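The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: the default ServiceAccount only appears once the control plane has finished elevating kube-system privileges, so minikube retries until the command succeeds. The shape of that loop, exec-based like the log (the helper name and 500ms interval are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` every interval until it
// succeeds or the deadline passes, like the polling burst in the log.
func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("default service account never appeared")
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```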
	I0524 11:36:50.332892    1534 kubeadm.go:406] StartCluster complete in 20.404490625s
	I0524 11:36:50.332916    1534 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333079    1534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:50.333301    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333499    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 11:36:50.333541    1534 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0524 11:36:50.333603    1534 addons.go:66] Setting ingress=true in profile "addons-514000"
	I0524 11:36:50.333609    1534 addons.go:66] Setting registry=true in profile "addons-514000"
	I0524 11:36:50.333611    1534 addons.go:228] Setting addon ingress=true in "addons-514000"
	I0524 11:36:50.333614    1534 addons.go:228] Setting addon registry=true in "addons-514000"
	I0524 11:36:50.333650    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333646    1534 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0524 11:36:50.333656    1534 addons.go:66] Setting storage-provisioner=true in profile "addons-514000"
	I0524 11:36:50.333660    1534 addons.go:228] Setting addon storage-provisioner=true in "addons-514000"
	I0524 11:36:50.333671    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333804    1534 addons.go:66] Setting metrics-server=true in profile "addons-514000"
	I0524 11:36:50.333879    1534 addons.go:228] Setting addon metrics-server=true in "addons-514000"
	I0524 11:36:50.333906    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333926    1534 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.333947    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:50.333682    1534 addons.go:66] Setting ingress-dns=true in profile "addons-514000"
	I0524 11:36:50.333976    1534 addons.go:228] Setting addon ingress-dns=true in "addons-514000"
	I0524 11:36:50.333995    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334035    1534 addons.go:66] Setting gcp-auth=true in profile "addons-514000"
	I0524 11:36:50.333605    1534 addons.go:66] Setting volumesnapshots=true in profile "addons-514000"
	I0524 11:36:50.334092    1534 addons.go:228] Setting addon volumesnapshots=true in "addons-514000"
	I0524 11:36:50.334116    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334159    1534 addons.go:66] Setting default-storageclass=true in profile "addons-514000"
	I0524 11:36:50.334172    1534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0524 11:36:50.333653    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334095    1534 mustload.go:65] Loading cluster: addons-514000
	I0524 11:36:50.334706    1534 addons.go:66] Setting inspektor-gadget=true in profile "addons-514000"
	I0524 11:36:50.334713    1534 addons.go:228] Setting addon inspektor-gadget=true in "addons-514000"
	I0524 11:36:50.333694    1534 addons.go:66] Setting cloud-spanner=true in profile "addons-514000"
	I0524 11:36:50.334861    1534 addons.go:228] Setting addon cloud-spanner=true in "addons-514000"
	I0524 11:36:50.334877    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334897    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334942    1534 host.go:66] Checking if "addons-514000" exists ...
	W0524 11:36:50.335292    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335303    1534 addons.go:274] "addons-514000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335306    1534 addons.go:464] Verifying addon metrics-server=true in "addons-514000"
	W0524 11:36:50.335329    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335333    1534 addons.go:274] "addons-514000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335353    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335359    1534 addons.go:274] "addons-514000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335362    1534 addons.go:464] Verifying addon registry=true in "addons-514000"
	W0524 11:36:50.335391    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.339535    1534 out.go:177] * Verifying registry addon...
	W0524 11:36:50.335411    1534 addons.go:274] "addons-514000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335412    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335520    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335588    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.335599    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0524 11:36:50.335650    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.349556    1534 addons.go:274] "addons-514000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0524 11:36:50.349673    1534 addons.go:274] "addons-514000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0524 11:36:50.349673    1534 addons.go:464] Verifying addon ingress=true in "addons-514000"
	W0524 11:36:50.349688    1534 addons_storage_classes.go:55] "addons-514000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0524 11:36:50.349678    1534 addons.go:274] "addons-514000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0524 11:36:50.350008    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0524 11:36:50.350257    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.353441    1534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 11:36:50.357663    1534 addons.go:228] Setting addon default-storageclass=true in "addons-514000"
	I0524 11:36:50.360618    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.360641    1534 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.360646    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 11:36:50.360653    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.365539    1534 out.go:177] * Verifying ingress addon...
	I0524 11:36:50.357776    1534 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0524 11:36:50.357776    1534 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.361446    1534 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.364279    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0524 11:36:50.381598    1534 out.go:177] * Verifying csi-hostpath-driver addon...
	I0524 11:36:50.369698    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 11:36:50.369727    1534 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0524 11:36:50.375900    1534 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0524 11:36:50.387638    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0524 11:36:50.387638    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.387646    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.388147    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0524 11:36:50.390627    1534 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0524 11:36:50.391169    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0524 11:36:50.400375    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 11:36:50.433263    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.499595    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0524 11:36:50.499607    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0524 11:36:50.511369    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.545082    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0524 11:36:50.545093    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0524 11:36:50.571075    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0524 11:36:50.571085    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0524 11:36:50.614490    1534 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0524 11:36:50.614502    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0524 11:36:50.628252    1534 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.628261    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0524 11:36:50.647925    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.858973    1534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0524 11:36:50.859000    1534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:50.862644    1534 out.go:177] * Verifying Kubernetes components...
	I0524 11:36:50.870714    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:51.015230    1534 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0524 11:36:51.239743    1534 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.239769    1534 retry.go:31] will retry after 300.967986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
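	The failure above is an ordering race rather than a bad manifest: the VolumeSnapshot CRDs and the VolumeSnapshotClass that uses them are submitted in a single kubectl invocation, and the API server has not finished registering the new CRDs when the class arrives, hence "ensure CRDs are installed first". minikube's remedy is simply to retry (the `apply --force` run below). A minimal standalone sketch of the same idea, assuming the addon manifests at the paths shown in the log, stages the apply and waits for the CRDs to be Established:

	    # Sketch only, not minikube's retry logic: create the CRDs, wait for the
	    # API server to mark them Established, then create the dependent class.
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml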
	I0524 11:36:51.240163    1534 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242031    1534 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0524 11:36:51.242040    1534 node_ready.go:38] duration metric: took 1.869375ms waiting for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242043    1534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:51.247820    1534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:51.542933    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:53.257970    1534 pod_ready.go:92] pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.257986    1534 pod_ready.go:81] duration metric: took 2.01016425s waiting for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.257991    1534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260855    1534 pod_ready.go:92] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.260862    1534 pod_ready.go:81] duration metric: took 2.866833ms waiting for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260867    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263593    1534 pod_ready.go:92] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.263598    1534 pod_ready.go:81] duration metric: took 2.728ms waiting for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263603    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266314    1534 pod_ready.go:92] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.266322    1534 pod_ready.go:81] duration metric: took 2.716417ms waiting for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266326    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268820    1534 pod_ready.go:92] pod "kube-proxy-2gj6m" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.268826    1534 pod_ready.go:81] duration metric: took 2.496209ms waiting for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268830    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659694    1534 pod_ready.go:92] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.659709    1534 pod_ready.go:81] duration metric: took 390.87725ms waiting for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659719    1534 pod_ready.go:38] duration metric: took 2.417685875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:53.659737    1534 api_server.go:52] waiting for apiserver process to appear ...
	I0524 11:36:53.659818    1534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 11:36:54.012047    1534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469105375s)
	I0524 11:36:54.012061    1534 api_server.go:72] duration metric: took 3.153054583s to wait for apiserver process to appear ...
	I0524 11:36:54.012066    1534 api_server.go:88] waiting for apiserver healthz status ...
	I0524 11:36:54.012074    1534 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0524 11:36:54.015086    1534 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
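	The healthz probe logged here is a plain HTTPS GET against the API server, which default RBAC exposes to unauthenticated callers. An equivalent manual check from the host (skipping certificate verification for brevity is an assumption, not what minikube does):

	    curl -k https://192.168.105.2:8443/healthz   # expect the body: ok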
	I0524 11:36:54.015747    1534 api_server.go:141] control plane version: v1.27.2
	I0524 11:36:54.015755    1534 api_server.go:131] duration metric: took 3.685917ms to wait for apiserver health ...
	I0524 11:36:54.015758    1534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 11:36:54.018844    1534 system_pods.go:59] 9 kube-system pods found
	I0524 11:36:54.018857    1534 system_pods.go:61] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.018861    1534 system_pods.go:61] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.018863    1534 system_pods.go:61] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.018865    1534 system_pods.go:61] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.018868    1534 system_pods.go:61] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.018870    1534 system_pods.go:61] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.018873    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018876    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018879    1534 system_pods.go:61] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.018881    1534 system_pods.go:74] duration metric: took 3.121167ms to wait for pod list to return data ...
	I0524 11:36:54.018883    1534 default_sa.go:34] waiting for default service account to be created ...
	I0524 11:36:54.057892    1534 default_sa.go:45] found service account: "default"
	I0524 11:36:54.057899    1534 default_sa.go:55] duration metric: took 39.013541ms for default service account to be created ...
	I0524 11:36:54.057902    1534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 11:36:54.259995    1534 system_pods.go:86] 9 kube-system pods found
	I0524 11:36:54.260005    1534 system_pods.go:89] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.260008    1534 system_pods.go:89] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.260011    1534 system_pods.go:89] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.260014    1534 system_pods.go:89] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.260016    1534 system_pods.go:89] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.260019    1534 system_pods.go:89] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.260023    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260027    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260030    1534 system_pods.go:89] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.260033    1534 system_pods.go:126] duration metric: took 202.129584ms to wait for k8s-apps to be running ...
	I0524 11:36:54.260037    1534 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 11:36:54.260088    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:54.265390    1534 system_svc.go:56] duration metric: took 5.350666ms WaitForService to wait for kubelet.
	I0524 11:36:54.265399    1534 kubeadm.go:581] duration metric: took 3.406395625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 11:36:54.265408    1534 node_conditions.go:102] verifying NodePressure condition ...
	I0524 11:36:54.458086    1534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 11:36:54.458097    1534 node_conditions.go:123] node cpu capacity is 2
	I0524 11:36:54.458103    1534 node_conditions.go:105] duration metric: took 192.694167ms to run NodePressure ...
	I0524 11:36:54.458107    1534 start.go:228] waiting for startup goroutines ...
	I0524 11:36:56.972492    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0524 11:36:56.972559    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.029376    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0524 11:36:57.038824    1534 addons.go:228] Setting addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.038864    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:57.040182    1534 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0524 11:36:57.040196    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.078053    1534 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0524 11:36:57.082115    1534 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0524 11:36:57.085015    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0524 11:36:57.085022    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0524 11:36:57.091862    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0524 11:36:57.091873    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0524 11:36:57.099462    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.099472    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0524 11:36:57.106631    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.550488    1534 addons.go:464] Verifying addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.555392    1534 out.go:177] * Verifying gcp-auth addon...
	I0524 11:36:57.561721    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0524 11:36:57.566760    1534 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0524 11:36:57.566769    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.070711    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.570942    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.076515    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.570540    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.070962    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.571104    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.071573    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.571018    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.072518    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.570869    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.071445    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.570661    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.070807    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.570832    1534 kapi.go:107] duration metric: took 7.009157292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0524 11:37:04.574809    1534 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0524 11:37:04.579620    1534 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0524 11:37:04.583658    1534 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
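	The `gcp-auth-skip-secret` opt-out described above is an ordinary pod label that the gcp-auth mutating webhook checks at admission time, so it must be present when the pod is created; labeling a running pod changes nothing, because mutation has already happened — hence the advice to recreate existing pods. A hypothetical example (pod name and image are placeholders):

	    # Create a pod that the gcp-auth webhook will leave unmutated.
	    kubectl run no-gcp-creds --image=nginx \
	      --labels="gcp-auth-skip-secret=true"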
	I0524 11:42:50.357445    1534 kapi.go:107] duration metric: took 6m0.009773291s to wait for kubernetes.io/minikube-addons=registry ...
	W0524 11:42:50.357907    1534 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0524 11:42:50.387243    1534 kapi.go:107] duration metric: took 6m0.001495875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0524 11:42:50.387315    1534 kapi.go:107] duration metric: took 6m0.013814333s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0524 11:42:50.395532    1534 out.go:177] * Enabled addons: metrics-server, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0524 11:42:50.403494    1534 addons.go:499] enable addons completed in 6m0.072361709s: enabled=[metrics-server ingress-dns inspektor-gadget cloud-spanner storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0524 11:42:50.403556    1534 start.go:233] waiting for cluster config update ...
	I0524 11:42:50.403587    1534 start.go:242] writing updated cluster config ...
	I0524 11:42:50.408325    1534 ssh_runner.go:195] Run: rm -f paused
	I0524 11:42:50.568016    1534 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 11:42:50.572568    1534 out.go:177] 
	W0524 11:42:50.576443    1534 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 11:42:50.580476    1534 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 11:42:50.587567    1534 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
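	The skew warning just above reflects kubectl's supported version-skew policy of one minor version on either side of the control plane: a v1.25 client against a v1.27 cluster exceeds it. Besides the one-off invocation minikube prints, the bundled version-matched client can be made the session default with a shell alias (a convenience suggested here, not minikube output):

	    alias kubectl="minikube kubectl --"
	    kubectl get pods -A   # now runs the v1.27.2 client minikube ships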
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:06:51 UTC. --
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.953788187Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557702813Z" level=info msg="shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557754555Z" level=warning msg="cleaning up after shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557759977Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:02 addons-514000 dockerd[916]: time="2023-05-24T18:37:02.558086156Z" level=info msg="ignoring event" container=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[916]: time="2023-05-24T18:37:03.602683250Z" level=info msg="ignoring event" container=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603089611Z" level=info msg="shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603147527Z" level=warning msg="cleaning up after shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603154445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:03 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856707697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856808407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856985177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856997233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202234365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202265406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202273323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202278489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:45 addons-514000 cri-dockerd[1138]: time="2023-05-24T19:04:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/324faf77934c8dd14cc1dcdd99d834dde8eed7155b1727c2ec6265fcc5463aac/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 19:04:45 addons-514000 dockerd[916]: time="2023-05-24T19:04:45.605001003Z" level=warning msg="reference for unknown type: " digest="sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be" remote="ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	May 24 19:04:49 addons-514000 cri-dockerd[1138]: time="2023-05-24T19:04:49Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.17.1@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985023040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985362786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985376786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985381869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	372016bac2d66       ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be                     2 minutes ago       Running             headlamp                     0                   324faf77934c8
	d1ad6d2cd7d4d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              29 minutes ago      Running             gcp-auth                     0                   f47df037d9956
	2623eeac77855       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   29 minutes ago      Running             volume-snapshot-controller   0                   60ea5019d1f26
	61fdb94dca547       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   29 minutes ago      Running             volume-snapshot-controller   0                   1f82f1afb5ca6
	5e708965dbb0a       97e04611ad434                                                                                                             29 minutes ago      Running             coredns                      0                   eaf04536825bb
	c6d1bdca910b8       ba04bb24b9575                                                                                                             29 minutes ago      Running             storage-provisioner          0                   55be207be2898
	bf84d832ec967       29921a0845422                                                                                                             30 minutes ago      Running             kube-proxy                   0                   59d50204b0754
	046435c695b1e       305d7ed1dae28                                                                                                             30 minutes ago      Running             kube-scheduler               0                   cd9a002bb369c
	aa80b21f85087       2ee705380c3c5                                                                                                             30 minutes ago      Running             kube-controller-manager      0                   0ebf3f27cb768
	d5556d8565d49       24bc64e911039                                                                                                             30 minutes ago      Running             etcd                         0                   37fcc92ec98a7
	a485542b186e4       72c9df6be7f1b                                                                                                             30 minutes ago      Running             kube-apiserver               0                   383872bb10f81
	
	* 
	* ==> coredns [5e708965dbb0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59819 - 54023 "HINFO IN 5089267470380203033.66065138292483152. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.424436073s
	[INFO] 10.244.0.7:57634 - 60032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112931s
	[INFO] 10.244.0.7:36916 - 20311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078547s
	[INFO] 10.244.0.7:53888 - 30613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056548s
	[INFO] 10.244.0.7:40805 - 41575 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000031112s
	[INFO] 10.244.0.7:39418 - 54110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031567s
	[INFO] 10.244.0.7:45485 - 20279 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113676s
	[INFO] 10.244.0.7:49511 - 45953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000780781s
	[INFO] 10.244.0.7:49660 - 37020 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00090552s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:06:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:05:14 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:05:14 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:05:14 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:05:14 +0000   Wed, 24 May 2023 18:36:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc66183cd0c646be999944d821185b81
	  System UUID:                cc66183cd0c646be999944d821185b81
	  Boot ID:                    2cd753bf-40ed-44ce-928e-d8bb002a6012
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-5429c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  headlamp                    headlamp-6b5756787-kf2ww                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 coredns-5d78c9869d-dmkfx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-514000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-514000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-514000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-2gj6m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-514000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-j5jhp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-txrxl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
	
	* 
	* ==> dmesg <==
	* [May24 18:36] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.727578] EINJ: EINJ table not found.
	[  +0.656332] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.905553] systemd-fstab-generator[471]: Ignoring "noauto" for root device
	[  +0.096232] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +2.874276] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +1.463827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.166355] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.076432] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.091985] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +1.135416] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091978] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.084182] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.089221] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.079548] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.085105] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +2.454751] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
	[  +5.146027] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[ +14.118818] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.617169] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.922890] kauditd_printk_skb: 33 callbacks suppressed
	[May24 18:37] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [d5556d8565d4] <==
	* {"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:46:33.876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2023-05-24T18:46:33.881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.400485ms","hash":3638416343}
	{"level":"info","ts":"2023-05-24T18:46:33.882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638416343,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2023-05-24T18:51:33.887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1145,"took":"2.024563ms","hash":894933936}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":894933936,"revision":1145,"compact-revision":846}
	{"level":"info","ts":"2023-05-24T18:56:33.899Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1444,"took":"1.805765ms","hash":2332186912}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2332186912,"revision":1444,"compact-revision":1145}
	{"level":"info","ts":"2023-05-24T19:01:33.910Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1744}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1744,"took":"1.711146ms","hash":1851239145}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1851239145,"revision":1744,"compact-revision":1444}
	{"level":"info","ts":"2023-05-24T19:06:33.919Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2043}
	{"level":"info","ts":"2023-05-24T19:06:33.922Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":2043,"took":"1.631648ms","hash":614110781}
	{"level":"info","ts":"2023-05-24T19:06:33.922Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":614110781,"revision":2043,"compact-revision":1744}
	
	* 
	* ==> gcp-auth [d1ad6d2cd7d4] <==
	* 2023/05/24 18:37:03 GCP Auth Webhook started!
	2023/05/24 19:04:44 Ready to marshal response ...
	2023/05/24 19:04:44 Ready to write response ...
	2023/05/24 19:04:44 Ready to marshal response ...
	2023/05/24 19:04:44 Ready to write response ...
	2023/05/24 19:04:44 Ready to marshal response ...
	2023/05/24 19:04:44 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:06:52 up 30 min,  0 users,  load average: 0.69, 0.56, 0.51
	Linux addons-514000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a485542b186e] <==
	* I0524 18:46:34.559325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.559355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.542946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.543557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.550014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.550133       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.556769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.556848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.536214       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.536373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.548003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.548106       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537554       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537601       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.538264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.538305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:04:44.806679       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.102.182.222]
	I0524 19:06:34.546285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:06:34.546419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:06:34.553782       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:06:34.554165       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:06:34.557667       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:06:34.557813       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [aa80b21f8508] <==
	* I0524 18:37:02.495937       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.582819       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.506916       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.513803       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:03.593747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.596306       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598360       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598521       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:03.685792       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.524940       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.535090       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.540969       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.541239       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:04.555353       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:20.560907       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0524 18:37:20.561333       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0524 18:37:20.662721       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 18:37:20.895710       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0524 18:37:20.999329       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 18:37:33.024397       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:33.041354       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:34.012720       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:34.026381       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 19:04:44.816553       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-6b5756787 to 1"
	I0524 19:04:44.834671       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-6b5756787-kf2ww"
	
	* 
	* ==> kube-proxy [bf84d832ec96] <==
	* I0524 18:36:51.096070       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0524 18:36:51.096254       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0524 18:36:51.096305       1 server_others.go:551] "Using iptables proxy"
	I0524 18:36:51.129985       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 18:36:51.130045       1 server_others.go:190] "Using iptables Proxier"
	I0524 18:36:51.130091       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 18:36:51.130875       1 server.go:657] "Version info" version="v1.27.2"
	I0524 18:36:51.130883       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 18:36:51.134580       1 config.go:188] "Starting service config controller"
	I0524 18:36:51.134608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 18:36:51.134627       1 config.go:97] "Starting endpoint slice config controller"
	I0524 18:36:51.134630       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 18:36:51.134949       1 config.go:315] "Starting node config controller"
	I0524 18:36:51.134952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 18:36:51.240491       1 shared_informer.go:318] Caches are synced for node config
	I0524 18:36:51.240513       1 shared_informer.go:318] Caches are synced for service config
	I0524 18:36:51.240529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [046435c695b1] <==
	* W0524 18:36:34.551296       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 18:36:34.551335       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 18:36:34.555158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 18:36:34.555224       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 18:36:34.555257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:34.555277       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:34.555318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:34.555338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:34.555364       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:34.555398       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:34.555416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:34.555434       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.414754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:35.414831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:35.419590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 18:36:35.419621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 18:36:35.431658       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:35.431697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:35.542100       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:35.542130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:35.557940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:35.558018       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.599004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 18:36:35.599089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0524 18:36:36.142741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:06:52 UTC. --
	May 24 19:03:37 addons-514000 kubelet[2266]: E0524 19:03:37.209339    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:03:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:03:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:03:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:04:37 addons-514000 kubelet[2266]: E0524 19:04:37.208780    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:04:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:04:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:04:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:04:44 addons-514000 kubelet[2266]: I0524 19:04:44.840065    2266 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:04:44 addons-514000 kubelet[2266]: E0524 19:04:44.840102    2266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8924bcf-105a-490c-8dee-b1e6ac2ae9dc" containerName="create"
	May 24 19:04:44 addons-514000 kubelet[2266]: E0524 19:04:44.840107    2266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4aa56f7c-19bb-453a-b200-e0f9135925dc" containerName="patch"
	May 24 19:04:44 addons-514000 kubelet[2266]: I0524 19:04:44.840120    2266 memory_manager.go:346] "RemoveStaleState removing state" podUID="a8924bcf-105a-490c-8dee-b1e6ac2ae9dc" containerName="create"
	May 24 19:04:44 addons-514000 kubelet[2266]: I0524 19:04:44.840123    2266 memory_manager.go:346] "RemoveStaleState removing state" podUID="4aa56f7c-19bb-453a-b200-e0f9135925dc" containerName="patch"
	May 24 19:04:44 addons-514000 kubelet[2266]: I0524 19:04:44.974513    2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4pl\" (UniqueName: \"kubernetes.io/projected/490c364a-3e83-4de2-89de-8ca7ba0dde32-kube-api-access-kd4pl\") pod \"headlamp-6b5756787-kf2ww\" (UID: \"490c364a-3e83-4de2-89de-8ca7ba0dde32\") " pod="headlamp/headlamp-6b5756787-kf2ww"
	May 24 19:04:44 addons-514000 kubelet[2266]: I0524 19:04:44.974596    2266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/490c364a-3e83-4de2-89de-8ca7ba0dde32-gcp-creds\") pod \"headlamp-6b5756787-kf2ww\" (UID: \"490c364a-3e83-4de2-89de-8ca7ba0dde32\") " pod="headlamp/headlamp-6b5756787-kf2ww"
	May 24 19:04:50 addons-514000 kubelet[2266]: I0524 19:04:50.872583    2266 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="headlamp/headlamp-6b5756787-kf2ww" podStartSLOduration=2.349694023 podCreationTimestamp="2023-05-24 19:04:44 +0000 UTC" firstStartedPulling="2023-05-24 19:04:45.370369124 +0000 UTC m=+1688.251162991" lastFinishedPulling="2023-05-24 19:04:49.893214539 +0000 UTC m=+1692.774008406" observedRunningTime="2023-05-24 19:04:50.858786302 +0000 UTC m=+1693.739580212" watchObservedRunningTime="2023-05-24 19:04:50.872539438 +0000 UTC m=+1693.753333348"
	May 24 19:05:37 addons-514000 kubelet[2266]: E0524 19:05:37.312943    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:05:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:05:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:05:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:06:37 addons-514000 kubelet[2266]: E0524 19:06:37.211882    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:06:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:06:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:06:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:06:37 addons-514000 kubelet[2266]: W0524 19:06:37.219278    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	
	* 
	* ==> storage-provisioner [c6d1bdca910b] <==
	* I0524 18:36:52.162540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 18:36:52.179095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 18:36:52.179236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 18:36:52.184538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 18:36:52.185437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	I0524 18:36:52.187871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d6698b6-9eb5-4aee-aab5-f9c270917482", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a became leader
	I0524 18:36:52.285999       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (721.01s)
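The kube-scheduler log excerpted above is dominated by paired "forbidden" reflector warnings/errors that stop once the informer caches sync at 18:36:36, i.e. ordinary startup noise while RBAC bootstrap is still reconciling, not the cause of this failure. If messages like these persisted, one way to verify a grant from outside the node is a SubjectAccessReview. The following is a minimal client-go sketch for one of the logged verb/resource pairs; it is illustrative only, not part of the test harness, and the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default ~/.kube/config; the test profile writes its own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the API server whether system:kube-scheduler may list
	// poddisruptionbudgets.policy, mirroring one reflector message above.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "policy",
				Resource: "poddisruptionbudgets",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```

The same one-off check is available as `kubectl auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler`.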

TestAddons/parallel/CSI (670.89s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:535: failed waiting for csi-hostpath-driver pods to stabilize: context deadline exceeded
addons_test.go:537: csi-hostpath-driver pods stabilized in 6m0.002617875s
addons_test.go:540: (dbg) Run:  kubectl --context addons-514000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
[... the identical "kubectl --context addons-514000 get pvc hpvc" poll line repeats until the 6m0s wait deadline elapses ...]
addons_test.go:546: failed waiting for PVC hpvc: context deadline exceeded
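Each helpers_test.go:394 line above is one iteration of a poll on the claim's .status.phase, and the test fails when the phase never reaches Bound before the 6m0s context deadline. For readers reproducing the check by hand, a minimal client-go equivalent of that wait loop follows; the 2-second interval and kubeconfig path are assumptions, not the harness's actual values:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll the claim's phase every 2s until it is Bound or 6m elapses,
	// matching the observable behavior of the helper loop above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
			Get(context.Background(), "hpvc", metav1.GetOptions{})
		if err != nil {
			return false, err // returning the error aborts the poll early
		}
		fmt.Println("phase:", pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		log.Fatalf("PVC hpvc never became Bound: %v", err)
	}
}
```

A claim that stays Pending for the full window is consistent with the csi-hostpath-driver pods never stabilizing, which is exactly what addons_test.go:535 reported before the PVC was created.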
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | --download-only -p             | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT |                     |
	|         | binary-mirror-689000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49309         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-689000        | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | -p addons-514000               | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:42 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:54 PDT |                     |
	|         | addons-514000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-514000        | jenkins | v1.30.1 | 24 May 23 12:04 PDT | 24 May 23 12:04 PDT |
	|         | -p addons-514000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:36:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:36:07.002339    1534 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:36:07.002453    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002456    1534 out.go:309] Setting ErrFile to fd 2...
	I0524 11:36:07.002459    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002536    1534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 11:36:07.003586    1534 out.go:303] Setting JSON to false
	I0524 11:36:07.018861    1534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":338,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:36:07.018925    1534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:36:07.027769    1534 out.go:177] * [addons-514000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:36:07.031820    1534 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 11:36:07.031893    1534 notify.go:220] Checking for updates...
	I0524 11:36:07.038648    1534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:07.041871    1534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:36:07.045796    1534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:36:07.047102    1534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 11:36:07.049751    1534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 11:36:07.052962    1534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:36:07.056656    1534 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 11:36:07.063768    1534 start.go:295] selected driver: qemu2
	I0524 11:36:07.063774    1534 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:36:07.063780    1534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 11:36:07.066216    1534 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:36:07.068844    1534 out.go:177] * Automatically selected the socket_vmnet network
	I0524 11:36:07.072801    1534 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 11:36:07.072817    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:07.072825    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:07.072829    1534 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 11:36:07.072834    1534 start_flags.go:319] config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:07.072903    1534 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:36:07.080756    1534 out.go:177] * Starting control plane node addons-514000 in cluster addons-514000
	I0524 11:36:07.084763    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:07.084787    1534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:36:07.084798    1534 cache.go:57] Caching tarball of preloaded images
	I0524 11:36:07.084855    1534 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 11:36:07.084860    1534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 11:36:07.085026    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:07.085039    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json: {Name:mk030e94b16168c63405a9b01e247098a953bb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:07.085215    1534 cache.go:195] Successfully downloaded all kic artifacts
	I0524 11:36:07.085252    1534 start.go:364] acquiring machines lock for addons-514000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 11:36:07.085315    1534 start.go:368] acquired machines lock for "addons-514000" in 57.708µs
	I0524 11:36:07.085327    1534 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:07.085355    1534 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 11:36:07.093778    1534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 11:36:07.463575    1534 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0524 11:36:07.463635    1534 client.go:168] LocalClient.Create starting
	I0524 11:36:07.463808    1534 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 11:36:07.521208    1534 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 11:36:07.678481    1534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 11:36:08.060894    1534 main.go:141] libmachine: Creating SSH key...
	I0524 11:36:08.147520    1534 main.go:141] libmachine: Creating Disk image...
	I0524 11:36:08.147526    1534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 11:36:08.147754    1534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.231403    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.231426    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.231485    1534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0524 11:36:08.238737    1534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 11:36:08.238750    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.238766    1534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
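
The two qemu-img calls above are the entire disk-provisioning step: the raw seed image is converted to qcow2, then logically grown by 20000 MB. Because qcow2 allocates host blocks lazily, the resize is effectively free regardless of the target size. A minimal Go sketch of the same sequence (a hypothetical helper, not minikube's actual code; paths are placeholders):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk converts a raw seed image to qcow2 and grows it by
    // extraMB megabytes, mirroring the two commands in the log above.
    func createDisk(rawPath, qcow2Path string, extraMB int) error {
        convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcow2Path)
        if out, err := convert.CombinedOutput(); err != nil {
            return fmt.Errorf("qemu-img convert: %v: %s", err, out)
        }
        resize := exec.Command("qemu-img", "resize", qcow2Path, fmt.Sprintf("+%dM", extraMB))
        if out, err := resize.CombinedOutput(); err != nil {
            return fmt.Errorf("qemu-img resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }
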
	I0524 11:36:08.238773    1534 main.go:141] libmachine: Starting QEMU VM...
	I0524 11:36:08.238817    1534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:73:48:f5:f9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.309201    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.309237    1534 main.go:141] libmachine: STDERR: 
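
The launch command's `-netdev socket,id=net0,fd=3` works because socket_vmnet_client connects to /var/run/socket_vmnet and then execs qemu-system-aarch64 with that connection inherited as file descriptor 3. Go exposes the same fd-passing mechanism through exec.Cmd.ExtraFiles; the sketch below demonstrates it with a pipe and cat rather than a vmnet socket and qemu:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ExtraFiles[0] becomes descriptor 3 in the child (after stdin,
    // stdout, stderr), which is exactly what `-netdev socket,id=net0,fd=3`
    // expects when socket_vmnet_client launches QEMU.
    func main() {
        r, w, err := os.Pipe() // stand-in for the vmnet connection
        if err != nil {
            panic(err)
        }
        fmt.Fprintln(w, "hello over fd 3")
        w.Close()

        cmd := exec.Command("cat", "/dev/fd/3")
        cmd.ExtraFiles = []*os.File{r} // inherited by the child as fd 3
        out, err := cmd.Output()
        fmt.Printf("child saw: %q (err=%v)\n", out, err)
    }
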
	I0524 11:36:08.309242    1534 main.go:141] libmachine: Attempt 0
	I0524 11:36:08.309258    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:10.311441    1534 main.go:141] libmachine: Attempt 1
	I0524 11:36:10.311529    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:12.313222    1534 main.go:141] libmachine: Attempt 2
	I0524 11:36:12.313245    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:14.315294    1534 main.go:141] libmachine: Attempt 3
	I0524 11:36:14.315307    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:16.317343    1534 main.go:141] libmachine: Attempt 4
	I0524 11:36:16.317356    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:18.319398    1534 main.go:141] libmachine: Attempt 5
	I0524 11:36:18.319426    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321607    1534 main.go:141] libmachine: Attempt 6
	I0524 11:36:20.321690    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321979    1534 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0524 11:36:20.322073    1534 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 11:36:20.322118    1534 main.go:141] libmachine: Found match: a:73:48:f5:f9:b3
	I0524 11:36:20.322159    1534 main.go:141] libmachine: IP: 192.168.105.2
	I0524 11:36:20.322182    1534 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
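
socket_vmnet leans on macOS's own DHCP server, so the only way to learn the new VM's address is to poll /var/db/dhcpd_leases until the generated MAC appears; note the search string a:73:48:f5:f9:b3 rather than 0a:73:..., because the lease file drops leading zeros from each octet. A rough sketch of that lookup, assuming the usual one-field-per-line lease-block format (name=/ip_address=/hw_address=), not minikube's actual parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // lookupIP scans macOS's DHCP lease database for a block whose
    // hw_address field ends with the given MAC and returns its ip_address.
    func lookupIP(leaseFile, mac string) (string, error) {
        f, err := os.Open(leaseFile)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
                return ip, nil // assumes ip_address precedes hw_address in a block
            case line == "}":
                ip = "" // block ended without a match; reset
            }
        }
        return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
    }

    func main() {
        ip, err := lookupIP("/var/db/dhcpd_leases", "a:73:48:f5:f9:b3")
        fmt.Println(ip, err)
    }
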
	I0524 11:36:22.345943    1534 machine.go:88] provisioning docker machine ...
	I0524 11:36:22.346010    1534 buildroot.go:166] provisioning hostname "addons-514000"
	I0524 11:36:22.346753    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.347771    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.347789    1534 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0524 11:36:22.440700    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0524 11:36:22.440862    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.441350    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.441366    1534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 11:36:22.513129    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 11:36:22.513148    1534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 11:36:22.513166    1534 buildroot.go:174] setting up certificates
	I0524 11:36:22.513196    1534 provision.go:83] configureAuth start
	I0524 11:36:22.513202    1534 provision.go:138] copyHostCerts
	I0524 11:36:22.513384    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 11:36:22.513907    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 11:36:22.514185    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 11:36:22.514351    1534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-514000]
	I0524 11:36:22.615592    1534 provision.go:172] copyRemoteCerts
	I0524 11:36:22.615660    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 11:36:22.615678    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:22.647614    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 11:36:22.654906    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 11:36:22.661956    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 11:36:22.668901    1534 provision.go:86] duration metric: configureAuth took 155.700959ms
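
The server certificate generated during configureAuth carries a SAN for every name a client might dial: the VM IP (listed twice in the log, which dedupes harmlessly), localhost/127.0.0.1, and the minikube and addons-514000 hostnames. The sketch below builds a certificate with the same SAN set using crypto/x509; for brevity it self-signs, whereas the real step signs with the CA in ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // A self-signed stand-in for the server.pem step; the SAN list
    // matches the san=[...] entries logged above.
    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-514000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // 26280h, as in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "addons-514000"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
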
	I0524 11:36:22.668909    1534 buildroot.go:189] setting minikube options for container-runtime
	I0524 11:36:22.669263    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:22.669315    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.669538    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.669543    1534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 11:36:22.728343    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 11:36:22.728351    1534 buildroot.go:70] root file system type: tmpfs
	I0524 11:36:22.728414    1534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 11:36:22.728455    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.728711    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.728749    1534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 11:36:22.797892    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 11:36:22.797940    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.798220    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.798231    1534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 11:36:23.149053    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 11:36:23.149067    1534 machine.go:91] provisioned docker machine in 803.097167ms
	I0524 11:36:23.149073    1534 client.go:171] LocalClient.Create took 15.685539208s
	I0524 11:36:23.149079    1534 start.go:167] duration metric: libmachine.API.Create for "addons-514000" took 15.685619292s
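
Note the pattern used to install the docker unit a few steps earlier: the rendered file lands at docker.service.new, and the mv/daemon-reload/enable/restart chain only runs when `diff -u` reports a difference (here the live unit did not exist yet, hence the stat error and the first-time symlink). That keeps provisioning idempotent across re-runs; the empty `ExecStart=` reset is already explained by the unit's own comments. A Go sketch of the same guard (hypothetical helper, not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the diff-or-replace one-liner: only when
    // the rendered unit differs from the live one (or the live one is
    // missing) is it moved into place and the service reloaded/restarted.
    func installIfChanged(newPath, livePath string) error {
        rendered, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if live, err := os.ReadFile(livePath); err == nil && bytes.Equal(live, rendered) {
            return os.Remove(newPath) // unchanged: drop the staged copy
        }
        if err := os.Rename(newPath, livePath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(installIfChanged("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"))
    }
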
	I0524 11:36:23.149084    1534 start.go:300] post-start starting for "addons-514000" (driver="qemu2")
	I0524 11:36:23.149087    1534 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 11:36:23.149151    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 11:36:23.149161    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.182740    1534 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 11:36:23.184182    1534 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 11:36:23.184191    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 11:36:23.184263    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 11:36:23.184291    1534 start.go:303] post-start completed in 35.204125ms
	I0524 11:36:23.184667    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:23.184838    1534 start.go:128] duration metric: createHost completed in 16.099587584s
	I0524 11:36:23.184860    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:23.185079    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:23.185084    1534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 11:36:23.240206    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684953383.421013085
	
	I0524 11:36:23.240212    1534 fix.go:207] guest clock: 1684953383.421013085
	I0524 11:36:23.240216    1534 fix.go:220] Guest: 2023-05-24 11:36:23.421013085 -0700 PDT Remote: 2023-05-24 11:36:23.184841 -0700 PDT m=+16.200821626 (delta=236.172085ms)
	I0524 11:36:23.240228    1534 fix.go:191] guest clock delta is within tolerance: 236.172085ms
	I0524 11:36:23.240231    1534 start.go:83] releasing machines lock for "addons-514000", held for 16.155020041s
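
The guest-clock check parses `date +%s.%N` from the VM and diffs it against the host's wall clock; the 236 ms skew is within tolerance, so nothing is adjusted. A sketch of the parse-and-compare using this run's values (the 2 s tolerance is illustrative only, not minikube's actual constant):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch turns `date +%s.%N` output into a time.Time without
    // routing the nanoseconds through float64 (which would lose precision).
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9] // pad/trim fraction to 9 digits
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1684953383.421013085") // guest output from this run
        if err != nil {
            panic(err)
        }
        host := time.Unix(1684953383, 184841000) // host clock at the same moment
        delta := guest.Sub(host)                 // prints 236.172085ms, matching the log
        const tol = 2 * time.Second              // illustrative tolerance
        fmt.Printf("delta=%v within=%v\n", delta, delta < tol && delta > -tol)
    }
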
	I0524 11:36:23.240534    1534 ssh_runner.go:195] Run: cat /version.json
	I0524 11:36:23.240542    1534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 11:36:23.240552    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.240589    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.271294    1534 ssh_runner.go:195] Run: systemctl --version
	I0524 11:36:23.356274    1534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 11:36:23.358206    1534 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 11:36:23.358253    1534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 11:36:23.363251    1534 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 11:36:23.363272    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:23.363358    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:23.374219    1534 docker.go:633] Got preloaded images: 
	I0524 11:36:23.374227    1534 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 11:36:23.374272    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:23.377135    1534 ssh_runner.go:195] Run: which lz4
	I0524 11:36:23.378475    1534 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 11:36:23.379822    1534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 11:36:23.379833    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 11:36:24.715030    1534 docker.go:597] Took 1.336609 seconds to copy over tarball
	I0524 11:36:24.715105    1534 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 11:36:25.802869    1534 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087750334s)
	I0524 11:36:25.802885    1534 ssh_runner.go:146] rm: /preloaded.tar.lz4
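
Because `docker images` came back empty, the 343 MB preload tarball is copied into the VM and untarred over /var, populating the docker image store directly instead of pulling the eight images over the network. The extract-and-delete pair, sketched in Go (the real steps run over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks a docker/overlay2 preload over /var and then
    // deletes the tarball, the same two steps timed in the log above.
    // -I lz4 makes tar filter the archive through the lz4 binary.
    func extractPreload(tarball string) error {
        if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("tar: %v: %s", err, out)
        }
        return exec.Command("sudo", "rm", tarball).Run()
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }
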
	I0524 11:36:25.818539    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:25.821398    1534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 11:36:25.826757    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:25.912573    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:27.259007    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346426625s)
	I0524 11:36:27.259050    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.259161    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.264502    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 11:36:27.267902    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 11:36:27.271357    1534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.271387    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 11:36:27.274823    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.278019    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 11:36:27.280856    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.283904    1534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 11:36:27.287473    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 11:36:27.291108    1534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 11:36:27.294288    1534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 11:36:27.297250    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.376117    1534 ssh_runner.go:195] Run: sudo systemctl restart containerd
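
The block of sed edits above normalizes /etc/containerd/config.toml so containerd and the kubelet agree on the cgroupfs driver even though Docker is the runtime actually selected. Each edit is an anchored in-place replace; here is a Go equivalent of the SystemdCgroup one (a sketch, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCgroupfs performs the same edit as the sed command in the log:
    // whatever SystemdCgroup is currently set to, force it to false so
    // the runtime drives cgroups through cgroupfs.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
        return os.WriteFile(path, re.ReplaceAll(data, []byte("${1}SystemdCgroup = false")), 0644)
    }

    func main() {
        fmt.Println(setCgroupfs("/etc/containerd/config.toml"))
    }
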
	I0524 11:36:27.384917    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.384994    1534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 11:36:27.390435    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.395426    1534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 11:36:27.402483    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.406870    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.411215    1534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 11:36:27.451530    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.456795    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.461922    1534 ssh_runner.go:195] Run: which cri-dockerd
	I0524 11:36:27.463049    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 11:36:27.465876    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 11:36:27.470660    1534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 11:36:27.538638    1534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 11:36:27.616092    1534 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.616109    1534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 11:36:27.621459    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.708405    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:28.851963    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.143548708s)
	I0524 11:36:28.852015    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:28.939002    1534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 11:36:29.020013    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:29.108812    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.187424    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 11:36:29.194801    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.274472    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 11:36:29.298400    1534 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 11:36:29.298499    1534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 11:36:29.300633    1534 start.go:549] Will wait 60s for crictl version
	I0524 11:36:29.300681    1534 ssh_runner.go:195] Run: which crictl
	I0524 11:36:29.302069    1534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 11:36:29.320125    1534 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 11:36:29.320196    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.329425    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.346012    1534 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 11:36:29.346159    1534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 11:36:29.347609    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
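
The /etc/hosts edit is an idempotent upsert: drop any line already ending in a tab plus host.minikube.internal, append the fresh mapping, and copy the result back through a temp file so the live file is never half-written. A Go version of the same idea (a sketch; the authoritative step is the bash one-liner above, which uses cp rather than rename):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so that exactly one
    // line maps name, mirroring the grep -v / echo / cp pipeline.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Stage the new content, then swap it in, so readers never see a
        // truncated hosts file.
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        fmt.Println(upsertHost("/etc/hosts", "192.168.105.1", "host.minikube.internal"))
    }
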
	I0524 11:36:29.351578    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:29.351619    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.359168    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.359177    1534 docker.go:563] Images already preloaded, skipping extraction
	I0524 11:36:29.359234    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.366578    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.366587    1534 cache_images.go:84] Images are preloaded, skipping loading
	I0524 11:36:29.366634    1534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 11:36:29.376722    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:29.376734    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:29.376743    1534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 11:36:29.376755    1534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 11:36:29.376831    1534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 11:36:29.376873    1534 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 11:36:29.376934    1534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 11:36:29.379950    1534 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 11:36:29.379980    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 11:36:29.383262    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 11:36:29.388298    1534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 11:36:29.393370    1534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0524 11:36:29.398040    1534 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0524 11:36:29.399441    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.403560    1534 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0524 11:36:29.403576    1534 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.403733    1534 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 11:36:29.494908    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt ...
	I0524 11:36:29.494916    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt: {Name:mkde13471093958a457d9307a0c213d7ba461177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495144    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key ...
	I0524 11:36:29.495147    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key: {Name:mk5b2a6f100829fa25412e4c96a6b4d9b186c9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495264    1534 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 11:36:29.601357    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt ...
	I0524 11:36:29.601364    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt: {Name:mkc3f94501092c9c51cfa6d329a0a2c4cec184ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601593    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key ...
	I0524 11:36:29.601596    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key: {Name:mk7acf18000a82a656fee32bbd454a3c129dabde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601733    1534 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key
	I0524 11:36:29.601741    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0524 11:36:29.653842    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt ...
	I0524 11:36:29.653845    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: {Name:mk3856cd37d1f07be2cc9902b19f9498b880112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654036    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key ...
	I0524 11:36:29.654040    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key: {Name:mkbc8808085e1496dcb2b3e03156e443b7b7994b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654176    1534 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969
	I0524 11:36:29.654188    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 11:36:29.724674    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 ...
	I0524 11:36:29.724678    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969: {Name:mk424188d0f28cb0aa520452bb8ec4583a153ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724815    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 ...
	I0524 11:36:29.724818    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969: {Name:mk98c3231c62717b32e2418cabd759d6ad5645ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724926    1534 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt
	I0524 11:36:29.725147    1534 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key
	I0524 11:36:29.725241    1534 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key
	I0524 11:36:29.725256    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0524 11:36:29.842949    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0524 11:36:29.842953    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mk581c30062675e68aafc25cb79bfc8a62fd3e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843105    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key ...
	I0524 11:36:29.843110    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key: {Name:mk019f6bac347a368012a36cea939860ce210025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843389    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 11:36:29.843593    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 11:36:29.843619    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 11:36:29.843756    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 11:36:29.844302    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 11:36:29.851879    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 11:36:29.859249    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 11:36:29.866847    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 11:36:29.873646    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 11:36:29.880415    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 11:36:29.887466    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 11:36:29.894575    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 11:36:29.901581    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 11:36:29.908027    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 11:36:29.914140    1534 ssh_runner.go:195] Run: openssl version
	I0524 11:36:29.916182    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 11:36:29.919659    1534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921372    1534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921394    1534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.923349    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
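
The `openssl x509 -hash -noout` call explains the b5213941.0 name in the following command: OpenSSL-style trust stores look certificates up by subject-name hash with a .0 suffix, so minikubeCA.pem has to be linked under that hash inside /etc/ssl/certs. A sketch of the install step (assumes an openssl binary on PATH; not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert installs a CA certificate under the subject-hash name
    // that OpenSSL's lookup code expects (e.g. /etc/ssl/certs/b5213941.0).
    func trustCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
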
	I0524 11:36:29.926902    1534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 11:36:29.928503    1534 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 11:36:29.928540    1534 kubeadm.go:404] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:29.928599    1534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 11:36:29.935998    1534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 11:36:29.939589    1534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 11:36:29.942818    1534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 11:36:29.945835    1534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 11:36:29.945853    1534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 11:36:29.967889    1534 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 11:36:29.967941    1534 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 11:36:30.020294    1534 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 11:36:30.020350    1534 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 11:36:30.020400    1534 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0524 11:36:30.076237    1534 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 11:36:30.084415    1534 out.go:204]   - Generating certificates and keys ...
	I0524 11:36:30.084460    1534 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 11:36:30.084494    1534 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 11:36:30.272940    1534 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 11:36:30.453046    1534 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 11:36:30.580586    1534 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 11:36:30.639773    1534 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 11:36:30.738497    1534 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 11:36:30.738567    1534 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.858811    1534 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 11:36:30.858875    1534 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.935967    1534 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 11:36:30.967281    1534 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 11:36:31.073416    1534 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 11:36:31.073445    1534 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 11:36:31.335469    1534 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 11:36:31.530915    1534 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 11:36:31.573436    1534 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 11:36:31.637219    1534 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 11:36:31.645102    1534 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 11:36:31.645531    1534 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 11:36:31.645571    1534 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 11:36:31.737201    1534 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 11:36:31.741345    1534 out.go:204]   - Booting up control plane ...
	I0524 11:36:31.741390    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 11:36:31.741439    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 11:36:31.741469    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 11:36:31.741512    1534 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 11:36:31.741595    1534 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 11:36:35.739695    1534 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002246 seconds
	I0524 11:36:35.739796    1534 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 11:36:35.750536    1534 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 11:36:36.270805    1534 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 11:36:36.271028    1534 kubeadm.go:322] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 11:36:36.776691    1534 kubeadm.go:322] [bootstrap-token] Using token: zlw52u.ca0agirmjwjpmd4f
	I0524 11:36:36.783931    1534 out.go:204]   - Configuring RBAC rules ...
	I0524 11:36:36.784005    1534 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 11:36:36.785227    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 11:36:36.791945    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 11:36:36.793322    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0524 11:36:36.794557    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 11:36:36.795891    1534 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 11:36:36.802617    1534 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 11:36:36.956552    1534 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 11:36:37.187637    1534 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 11:36:37.187937    1534 kubeadm.go:322] 
	I0524 11:36:37.187967    1534 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 11:36:37.187973    1534 kubeadm.go:322] 
	I0524 11:36:37.188044    1534 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 11:36:37.188053    1534 kubeadm.go:322] 
	I0524 11:36:37.188069    1534 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 11:36:37.188099    1534 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 11:36:37.188128    1534 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 11:36:37.188133    1534 kubeadm.go:322] 
	I0524 11:36:37.188155    1534 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 11:36:37.188158    1534 kubeadm.go:322] 
	I0524 11:36:37.188189    1534 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 11:36:37.188193    1534 kubeadm.go:322] 
	I0524 11:36:37.188219    1534 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 11:36:37.188277    1534 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 11:36:37.188314    1534 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 11:36:37.188322    1534 kubeadm.go:322] 
	I0524 11:36:37.188361    1534 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 11:36:37.188399    1534 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 11:36:37.188411    1534 kubeadm.go:322] 
	I0524 11:36:37.188464    1534 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188516    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 11:36:37.188534    1534 kubeadm.go:322] 	--control-plane 
	I0524 11:36:37.188538    1534 kubeadm.go:322] 
	I0524 11:36:37.188580    1534 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 11:36:37.188584    1534 kubeadm.go:322] 
	I0524 11:36:37.188629    1534 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188681    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 11:36:37.188736    1534 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 11:36:37.188819    1534 kubeadm.go:322] W0524 18:36:30.200947    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188904    1534 kubeadm.go:322] W0524 18:36:31.916526    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
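	(A note on the join commands printed above: bootstrap tokens expire, 24h by default. If the token has lapsed before another node joins, a fresh token and a complete join command can be generated on the control plane — standard kubeadm usage, not something this run performs:)

	    kubeadm token create --print-join-command
	    # prints: kubeadm join control-plane.minikube.internal:8443 --token <new-token> \
	    #         --discovery-token-ca-cert-hash sha256:<hash>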
	I0524 11:36:37.188909    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:37.188916    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:37.195686    1534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 11:36:37.199715    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 11:36:37.203087    1534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
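	(The 457-byte conflist itself is not reproduced in the log. A typical bridge conflist of the shape minikube writes — illustrative only; the 10.244.0.0/16 pod subnet matches the 10.244.0.x pod IPs seen later in the CoreDNS log — would be:)

	    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist   # illustrative sketch, not the exact payload
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": { "portMappings": true }
	        }
	      ]
	    }
	    EOF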
	I0524 11:36:37.208259    1534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 11:36:37.208303    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.208333    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=addons-514000 minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.258566    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.271047    1534 ops.go:34] apiserver oom_adj: -16
	I0524 11:36:37.796169    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.296162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.796257    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.295049    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.796244    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.796162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.296458    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.796323    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.296423    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.796432    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.296246    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.796149    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.296189    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.796183    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.296206    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.796370    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.296192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.296219    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.796135    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.296201    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.796192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.296070    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.332878    1534 kubeadm.go:1076] duration metric: took 13.124695208s to wait for elevateKubeSystemPrivileges.
	I0524 11:36:50.332892    1534 kubeadm.go:406] StartCluster complete in 20.404490625s
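	(The burst of `kubectl get sa default` calls above is a readiness poll: minikube retries roughly every 500ms until the "default" ServiceAccount exists, which is what gates elevateKubeSystemPrivileges. A minimal shell sketch of the same loop:)

	    # poll until the "default" ServiceAccount exists, then proceed
	    until sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # matches the ~500ms spacing of the retries above
	    done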
	I0524 11:36:50.332916    1534 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333079    1534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:50.333301    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333499    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 11:36:50.333541    1534 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0524 11:36:50.333603    1534 addons.go:66] Setting ingress=true in profile "addons-514000"
	I0524 11:36:50.333609    1534 addons.go:66] Setting registry=true in profile "addons-514000"
	I0524 11:36:50.333611    1534 addons.go:228] Setting addon ingress=true in "addons-514000"
	I0524 11:36:50.333614    1534 addons.go:228] Setting addon registry=true in "addons-514000"
	I0524 11:36:50.333650    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333646    1534 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0524 11:36:50.333656    1534 addons.go:66] Setting storage-provisioner=true in profile "addons-514000"
	I0524 11:36:50.333660    1534 addons.go:228] Setting addon storage-provisioner=true in "addons-514000"
	I0524 11:36:50.333671    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333804    1534 addons.go:66] Setting metrics-server=true in profile "addons-514000"
	I0524 11:36:50.333879    1534 addons.go:228] Setting addon metrics-server=true in "addons-514000"
	I0524 11:36:50.333906    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333926    1534 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.333947    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:50.333682    1534 addons.go:66] Setting ingress-dns=true in profile "addons-514000"
	I0524 11:36:50.333976    1534 addons.go:228] Setting addon ingress-dns=true in "addons-514000"
	I0524 11:36:50.333995    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334035    1534 addons.go:66] Setting gcp-auth=true in profile "addons-514000"
	I0524 11:36:50.333605    1534 addons.go:66] Setting volumesnapshots=true in profile "addons-514000"
	I0524 11:36:50.334092    1534 addons.go:228] Setting addon volumesnapshots=true in "addons-514000"
	I0524 11:36:50.334116    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334159    1534 addons.go:66] Setting default-storageclass=true in profile "addons-514000"
	I0524 11:36:50.334172    1534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0524 11:36:50.333653    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334095    1534 mustload.go:65] Loading cluster: addons-514000
	I0524 11:36:50.334706    1534 addons.go:66] Setting inspektor-gadget=true in profile "addons-514000"
	I0524 11:36:50.334713    1534 addons.go:228] Setting addon inspektor-gadget=true in "addons-514000"
	I0524 11:36:50.333694    1534 addons.go:66] Setting cloud-spanner=true in profile "addons-514000"
	I0524 11:36:50.334861    1534 addons.go:228] Setting addon cloud-spanner=true in "addons-514000"
	I0524 11:36:50.334877    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334897    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334942    1534 host.go:66] Checking if "addons-514000" exists ...
	W0524 11:36:50.335292    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335303    1534 addons.go:274] "addons-514000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335306    1534 addons.go:464] Verifying addon metrics-server=true in "addons-514000"
	W0524 11:36:50.335329    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335333    1534 addons.go:274] "addons-514000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335353    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335359    1534 addons.go:274] "addons-514000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335362    1534 addons.go:464] Verifying addon registry=true in "addons-514000"
	W0524 11:36:50.335391    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.339535    1534 out.go:177] * Verifying registry addon...
	W0524 11:36:50.335411    1534 addons.go:274] "addons-514000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335412    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335520    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335588    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.335599    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0524 11:36:50.335650    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.349556    1534 addons.go:274] "addons-514000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0524 11:36:50.349673    1534 addons.go:274] "addons-514000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0524 11:36:50.349673    1534 addons.go:464] Verifying addon ingress=true in "addons-514000"
	W0524 11:36:50.349688    1534 addons_storage_classes.go:55] "addons-514000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0524 11:36:50.349678    1534 addons.go:274] "addons-514000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0524 11:36:50.350008    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0524 11:36:50.350257    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.353441    1534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 11:36:50.357663    1534 addons.go:228] Setting addon default-storageclass=true in "addons-514000"
	I0524 11:36:50.360618    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.360641    1534 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.360646    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 11:36:50.360653    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.365539    1534 out.go:177] * Verifying ingress addon...
	I0524 11:36:50.357776    1534 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0524 11:36:50.357776    1534 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.361446    1534 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.364279    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0524 11:36:50.381598    1534 out.go:177] * Verifying csi-hostpath-driver addon...
	I0524 11:36:50.369698    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 11:36:50.369727    1534 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0524 11:36:50.375900    1534 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0524 11:36:50.387638    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0524 11:36:50.387638    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.387646    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.388147    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0524 11:36:50.390627    1534 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0524 11:36:50.391169    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0524 11:36:50.400375    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
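	(The sed pipeline above splices a `hosts` block, plus a `log` directive, into the CoreDNS Corefile before replacing the ConfigMap, so in-cluster lookups of host.minikube.internal resolve to the host machine. Reconstructed from the sed script — the Corefile itself is not dumped in this log — the edited file then contains a fragment like:)

	            hosts {
	               192.168.105.1 host.minikube.internal
	               fallthrough
	            }
	            forward . /etc/resolv.conf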
	I0524 11:36:50.433263    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.499595    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0524 11:36:50.499607    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0524 11:36:50.511369    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.545082    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0524 11:36:50.545093    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0524 11:36:50.571075    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0524 11:36:50.571085    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0524 11:36:50.614490    1534 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0524 11:36:50.614502    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0524 11:36:50.628252    1534 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.628261    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0524 11:36:50.647925    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.858973    1534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0524 11:36:50.859000    1534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:50.862644    1534 out.go:177] * Verifying Kubernetes components...
	I0524 11:36:50.870714    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:51.015230    1534 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0524 11:36:51.239743    1534 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.239769    1534 retry.go:31] will retry after 300.967986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
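	(The failure is a CRD/CR ordering race: the VolumeSnapshotClass object is submitted in the same `kubectl apply` that creates its CustomResourceDefinition, and the API server has not yet established the new type — hence "ensure CRDs are installed first". minikube's answer below is simply to retry, later with `apply --force`; a hedged sketch of the more deliberate two-phase approach would be:)

	    # phase 1: apply the snapshot CRDs on their own
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # phase 2: block until the new types are served
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl wait \
	      --for=condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # phase 3: create the resources that depend on the CRDs
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml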
	I0524 11:36:51.240163    1534 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242031    1534 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0524 11:36:51.242040    1534 node_ready.go:38] duration metric: took 1.869375ms waiting for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242043    1534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:51.247820    1534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:51.542933    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:53.257970    1534 pod_ready.go:92] pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.257986    1534 pod_ready.go:81] duration metric: took 2.01016425s waiting for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.257991    1534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260855    1534 pod_ready.go:92] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.260862    1534 pod_ready.go:81] duration metric: took 2.866833ms waiting for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260867    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263593    1534 pod_ready.go:92] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.263598    1534 pod_ready.go:81] duration metric: took 2.728ms waiting for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263603    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266314    1534 pod_ready.go:92] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.266322    1534 pod_ready.go:81] duration metric: took 2.716417ms waiting for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266326    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268820    1534 pod_ready.go:92] pod "kube-proxy-2gj6m" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.268826    1534 pod_ready.go:81] duration metric: took 2.496209ms waiting for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268830    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659694    1534 pod_ready.go:92] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.659709    1534 pod_ready.go:81] duration metric: took 390.87725ms waiting for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659719    1534 pod_ready.go:38] duration metric: took 2.417685875s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:53.659737    1534 api_server.go:52] waiting for apiserver process to appear ...
	I0524 11:36:53.659818    1534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 11:36:54.012047    1534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469105375s)
	I0524 11:36:54.012061    1534 api_server.go:72] duration metric: took 3.153054583s to wait for apiserver process to appear ...
	I0524 11:36:54.012066    1534 api_server.go:88] waiting for apiserver healthz status ...
	I0524 11:36:54.012074    1534 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0524 11:36:54.015086    1534 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
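	(The healthz probe is an unauthenticated GET: Kubernetes binds the system:public-info-viewer role to anonymous users for /healthz, /livez, /readyz and /version, so the same check can be reproduced from the host. -k skips verification of the cluster's self-signed serving certificate:)

	    curl -k https://192.168.105.2:8443/healthz
	    # ok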
	I0524 11:36:54.015747    1534 api_server.go:141] control plane version: v1.27.2
	I0524 11:36:54.015755    1534 api_server.go:131] duration metric: took 3.685917ms to wait for apiserver health ...
	I0524 11:36:54.015758    1534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 11:36:54.018844    1534 system_pods.go:59] 9 kube-system pods found
	I0524 11:36:54.018857    1534 system_pods.go:61] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.018861    1534 system_pods.go:61] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.018863    1534 system_pods.go:61] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.018865    1534 system_pods.go:61] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.018868    1534 system_pods.go:61] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.018870    1534 system_pods.go:61] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.018873    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018876    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018879    1534 system_pods.go:61] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.018881    1534 system_pods.go:74] duration metric: took 3.121167ms to wait for pod list to return data ...
	I0524 11:36:54.018883    1534 default_sa.go:34] waiting for default service account to be created ...
	I0524 11:36:54.057892    1534 default_sa.go:45] found service account: "default"
	I0524 11:36:54.057899    1534 default_sa.go:55] duration metric: took 39.013541ms for default service account to be created ...
	I0524 11:36:54.057902    1534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 11:36:54.259995    1534 system_pods.go:86] 9 kube-system pods found
	I0524 11:36:54.260005    1534 system_pods.go:89] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.260008    1534 system_pods.go:89] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.260011    1534 system_pods.go:89] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.260014    1534 system_pods.go:89] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.260016    1534 system_pods.go:89] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.260019    1534 system_pods.go:89] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.260023    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260027    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260030    1534 system_pods.go:89] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.260033    1534 system_pods.go:126] duration metric: took 202.129584ms to wait for k8s-apps to be running ...
	I0524 11:36:54.260037    1534 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 11:36:54.260088    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:54.265390    1534 system_svc.go:56] duration metric: took 5.350666ms WaitForService to wait for kubelet.
	I0524 11:36:54.265399    1534 kubeadm.go:581] duration metric: took 3.406395625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 11:36:54.265408    1534 node_conditions.go:102] verifying NodePressure condition ...
	I0524 11:36:54.458086    1534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 11:36:54.458097    1534 node_conditions.go:123] node cpu capacity is 2
	I0524 11:36:54.458103    1534 node_conditions.go:105] duration metric: took 192.694167ms to run NodePressure ...
	I0524 11:36:54.458107    1534 start.go:228] waiting for startup goroutines ...
	I0524 11:36:56.972492    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0524 11:36:56.972559    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.029376    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0524 11:36:57.038824    1534 addons.go:228] Setting addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.038864    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:57.040182    1534 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0524 11:36:57.040196    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.078053    1534 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0524 11:36:57.082115    1534 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0524 11:36:57.085015    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0524 11:36:57.085022    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0524 11:36:57.091862    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0524 11:36:57.091873    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0524 11:36:57.099462    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.099472    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0524 11:36:57.106631    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.550488    1534 addons.go:464] Verifying addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.555392    1534 out.go:177] * Verifying gcp-auth addon...
	I0524 11:36:57.561721    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0524 11:36:57.566760    1534 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0524 11:36:57.566769    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.070711    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.570942    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.076515    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.570540    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.070962    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.571104    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.071573    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.571018    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.072518    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.570869    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.071445    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.570661    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.070807    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.570832    1534 kapi.go:107] duration metric: took 7.009157292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0524 11:37:04.574809    1534 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0524 11:37:04.579620    1534 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0524 11:37:04.583658    1534 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
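	(The opt-out mentioned above is a pod-level label. A hedged example — the pod name and image are placeholders, not taken from this run:)

	    cat <<'EOF' | kubectl apply -f -
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"   # the label the message above refers to
	    spec:
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF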
	I0524 11:42:50.357445    1534 kapi.go:107] duration metric: took 6m0.009773291s to wait for kubernetes.io/minikube-addons=registry ...
	W0524 11:42:50.357907    1534 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0524 11:42:50.387243    1534 kapi.go:107] duration metric: took 6m0.001495875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0524 11:42:50.387315    1534 kapi.go:107] duration metric: took 6m0.013814333s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0524 11:42:50.395532    1534 out.go:177] * Enabled addons: metrics-server, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0524 11:42:50.403494    1534 addons.go:499] enable addons completed in 6m0.072361709s: enabled=[metrics-server ingress-dns inspektor-gadget cloud-spanner storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0524 11:42:50.403556    1534 start.go:233] waiting for cluster config update ...
	I0524 11:42:50.403587    1534 start.go:242] writing updated cluster config ...
	I0524 11:42:50.408325    1534 ssh_runner.go:195] Run: rm -f paused
	I0524 11:42:50.568016    1534 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 11:42:50.572568    1534 out.go:177] 
	W0524 11:42:50.576443    1534 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 11:42:50.580476    1534 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 11:42:50.587567    1534 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:16:07 UTC. --
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.953788187Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557702813Z" level=info msg="shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557754555Z" level=warning msg="cleaning up after shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557759977Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:02 addons-514000 dockerd[916]: time="2023-05-24T18:37:02.558086156Z" level=info msg="ignoring event" container=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[916]: time="2023-05-24T18:37:03.602683250Z" level=info msg="ignoring event" container=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603089611Z" level=info msg="shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603147527Z" level=warning msg="cleaning up after shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603154445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:03 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856707697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856808407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856985177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856997233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202234365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202265406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202273323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:04:45 addons-514000 dockerd[922]: time="2023-05-24T19:04:45.202278489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:45 addons-514000 cri-dockerd[1138]: time="2023-05-24T19:04:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/324faf77934c8dd14cc1dcdd99d834dde8eed7155b1727c2ec6265fcc5463aac/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 19:04:45 addons-514000 dockerd[916]: time="2023-05-24T19:04:45.605001003Z" level=warning msg="reference for unknown type: " digest="sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be" remote="ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	May 24 19:04:49 addons-514000 cri-dockerd[1138]: time="2023-05-24T19:04:49Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.17.1@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985023040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985362786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985376786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:04:49 addons-514000 dockerd[922]: time="2023-05-24T19:04:49.985381869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	372016bac2d66       ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be                     11 minutes ago      Running             headlamp                     0                   324faf77934c8
	d1ad6d2cd7d4d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              39 minutes ago      Running             gcp-auth                     0                   f47df037d9956
	2623eeac77855       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   39 minutes ago      Running             volume-snapshot-controller   0                   60ea5019d1f26
	61fdb94dca547       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   39 minutes ago      Running             volume-snapshot-controller   0                   1f82f1afb5ca6
	5e708965dbb0a       97e04611ad434                                                                                                             39 minutes ago      Running             coredns                      0                   eaf04536825bb
	c6d1bdca910b8       ba04bb24b9575                                                                                                             39 minutes ago      Running             storage-provisioner          0                   55be207be2898
	bf84d832ec967       29921a0845422                                                                                                             39 minutes ago      Running             kube-proxy                   0                   59d50204b0754
	046435c695b1e       305d7ed1dae28                                                                                                             39 minutes ago      Running             kube-scheduler               0                   cd9a002bb369c
	aa80b21f85087       2ee705380c3c5                                                                                                             39 minutes ago      Running             kube-controller-manager      0                   0ebf3f27cb768
	d5556d8565d49       24bc64e911039                                                                                                             39 minutes ago      Running             etcd                         0                   37fcc92ec98a7
	a485542b186e4       72c9df6be7f1b                                                                                                             39 minutes ago      Running             kube-apiserver               0                   383872bb10f81
	
	* 
	* ==> coredns [5e708965dbb0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59819 - 54023 "HINFO IN 5089267470380203033.66065138292483152. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.424436073s
	[INFO] 10.244.0.7:57634 - 60032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112931s
	[INFO] 10.244.0.7:36916 - 20311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078547s
	[INFO] 10.244.0.7:53888 - 30613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056548s
	[INFO] 10.244.0.7:40805 - 41575 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000031112s
	[INFO] 10.244.0.7:39418 - 54110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031567s
	[INFO] 10.244.0.7:45485 - 20279 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113676s
	[INFO] 10.244.0.7:49511 - 45953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000780781s
	[INFO] 10.244.0.7:49660 - 37020 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00090552s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:15:28 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:15:28 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:15:28 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:15:28 +0000   Wed, 24 May 2023 18:36:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc66183cd0c646be999944d821185b81
	  System UUID:                cc66183cd0c646be999944d821185b81
	  Boot ID:                    2cd753bf-40ed-44ce-928e-d8bb002a6012
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-5429c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  headlamp                    headlamp-6b5756787-kf2ww                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5d78c9869d-dmkfx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     39m
	  kube-system                 etcd-addons-514000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         39m
	  kube-system                 kube-apiserver-addons-514000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-controller-manager-addons-514000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-proxy-2gj6m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-scheduler-addons-514000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 snapshot-controller-75bbb956b9-j5jhp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 snapshot-controller-75bbb956b9-txrxl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 39m   kube-proxy       
	  Normal  Starting                 39m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  39m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                39m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           39m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
	
	* 
	* ==> dmesg <==
	* [May24 18:36] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.727578] EINJ: EINJ table not found.
	[  +0.656332] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.905553] systemd-fstab-generator[471]: Ignoring "noauto" for root device
	[  +0.096232] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +2.874276] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +1.463827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.166355] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.076432] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.091985] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +1.135416] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091978] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.084182] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.089221] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.079548] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.085105] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +2.454751] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
	[  +5.146027] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[ +14.118818] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.617169] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.922890] kauditd_printk_skb: 33 callbacks suppressed
	[May24 18:37] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [d5556d8565d4] <==
	* {"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:46:33.876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2023-05-24T18:46:33.881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.400485ms","hash":3638416343}
	{"level":"info","ts":"2023-05-24T18:46:33.882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638416343,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2023-05-24T18:51:33.887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1145,"took":"2.024563ms","hash":894933936}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":894933936,"revision":1145,"compact-revision":846}
	{"level":"info","ts":"2023-05-24T18:56:33.899Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1444,"took":"1.805765ms","hash":2332186912}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2332186912,"revision":1444,"compact-revision":1145}
	{"level":"info","ts":"2023-05-24T19:01:33.910Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1744}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1744,"took":"1.711146ms","hash":1851239145}
	{"level":"info","ts":"2023-05-24T19:01:33.913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1851239145,"revision":1744,"compact-revision":1444}
	{"level":"info","ts":"2023-05-24T19:06:33.919Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2043}
	{"level":"info","ts":"2023-05-24T19:06:33.922Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":2043,"took":"1.631648ms","hash":614110781}
	{"level":"info","ts":"2023-05-24T19:06:33.922Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":614110781,"revision":2043,"compact-revision":1744}
	{"level":"info","ts":"2023-05-24T19:11:33.930Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2379}
	{"level":"info","ts":"2023-05-24T19:11:33.934Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":2379,"took":"2.540043ms","hash":4182065991}
	{"level":"info","ts":"2023-05-24T19:11:33.934Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4182065991,"revision":2379,"compact-revision":2043}
	
	* 
	* ==> gcp-auth [d1ad6d2cd7d4] <==
	* 2023/05/24 18:37:03 GCP Auth Webhook started!
	2023/05/24 19:04:44 Ready to marshal response ...
	2023/05/24 19:04:44 Ready to write response ...
	2023/05/24 19:04:44 Ready to marshal response ...
	2023/05/24 19:04:44 Ready to write response ...
	2023/05/24 19:04:44 Ready to marshal response ...
	2023/05/24 19:04:44 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:16:07 up 39 min,  0 users,  load average: 0.48, 0.47, 0.48
	Linux addons-514000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a485542b186e] <==
	* I0524 18:51:34.556769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.556848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.536214       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.536373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.548003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.548106       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.537554       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.537601       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:01:34.538264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:01:34.538305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:04:44.806679       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.102.182.222]
	I0524 19:06:34.546285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:06:34.546419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:06:34.553782       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:06:34.554165       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:06:34.557667       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:06:34.557813       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:11:34.537394       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:11:34.537548       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:11:34.547604       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:11:34.548155       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 19:11:34.562736       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 19:11:34.562812       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [aa80b21f8508] <==
	* I0524 19:13:05.489615       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:13:20.490200       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:13:20.490782       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:13:35.490929       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:13:35.491134       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:13:50.492051       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:13:50.492370       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:14:05.492682       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:14:05.493051       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:14:20.494547       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:14:20.495269       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:14:35.497020       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:14:35.497053       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:14:50.498179       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:14:50.498434       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:15:05.498834       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:15:05.499421       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:15:20.499516       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:15:20.499847       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:15:35.504070       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:15:35.504556       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:15:50.504257       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:15:50.504654       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0524 19:16:05.504724       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0524 19:16:05.504775       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	
	* 
	* ==> kube-proxy [bf84d832ec96] <==
	* I0524 18:36:51.096070       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0524 18:36:51.096254       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0524 18:36:51.096305       1 server_others.go:551] "Using iptables proxy"
	I0524 18:36:51.129985       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 18:36:51.130045       1 server_others.go:190] "Using iptables Proxier"
	I0524 18:36:51.130091       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 18:36:51.130875       1 server.go:657] "Version info" version="v1.27.2"
	I0524 18:36:51.130883       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 18:36:51.134580       1 config.go:188] "Starting service config controller"
	I0524 18:36:51.134608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 18:36:51.134627       1 config.go:97] "Starting endpoint slice config controller"
	I0524 18:36:51.134630       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 18:36:51.134949       1 config.go:315] "Starting node config controller"
	I0524 18:36:51.134952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 18:36:51.240491       1 shared_informer.go:318] Caches are synced for node config
	I0524 18:36:51.240513       1 shared_informer.go:318] Caches are synced for service config
	I0524 18:36:51.240529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [046435c695b1] <==
	* W0524 18:36:34.551296       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 18:36:34.551335       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 18:36:34.555158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 18:36:34.555224       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 18:36:34.555257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:34.555277       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:34.555318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:34.555338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:34.555364       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:34.555398       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:34.555416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:34.555434       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.414754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:35.414831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:35.419590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 18:36:35.419621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 18:36:35.431658       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:35.431697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:35.542100       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:35.542130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:35.557940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:35.558018       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.599004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 18:36:35.599089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0524 18:36:36.142741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 19:16:07 UTC. --
	May 24 19:10:37 addons-514000 kubelet[2266]: E0524 19:10:37.214658    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:10:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:10:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:10:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:11:37 addons-514000 kubelet[2266]: E0524 19:11:37.207012    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:11:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:11:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:11:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:11:37 addons-514000 kubelet[2266]: W0524 19:11:37.212355    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	May 24 19:12:37 addons-514000 kubelet[2266]: E0524 19:12:37.207212    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:12:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:12:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:12:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:13:37 addons-514000 kubelet[2266]: E0524 19:13:37.206808    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:13:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:13:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:13:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:14:37 addons-514000 kubelet[2266]: E0524 19:14:37.206826    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:14:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:14:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:14:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:15:37 addons-514000 kubelet[2266]: E0524 19:15:37.206985    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:15:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:15:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:15:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	
	* 
	* ==> storage-provisioner [c6d1bdca910b] <==
	* I0524 18:36:52.162540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 18:36:52.179095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 18:36:52.179236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 18:36:52.184538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 18:36:52.185437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	I0524 18:36:52.187871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d6698b6-9eb5-4aee-aab5-f9c270917482", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a became leader
	I0524 18:36:52.285999       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (670.89s)

TestAddons/parallel/CloudSpanner (832.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-05-24 11:54:50.710124 -0700 PDT m=+1147.556787334
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-514000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-514000: exit status 10 (1m51.379214708s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-514000" : exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |                     |
	|         | -p download-only-108000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| delete  | -p download-only-108000        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | --download-only -p             | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT |                     |
	|         | binary-mirror-689000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49309         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-689000        | binary-mirror-689000 | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:36 PDT |
	| start   | -p addons-514000               | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:36 PDT | 24 May 23 11:42 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-514000        | jenkins | v1.30.1 | 24 May 23 11:54 PDT |                     |
	|         | addons-514000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:36:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:36:07.002339    1534 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:36:07.002453    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002456    1534 out.go:309] Setting ErrFile to fd 2...
	I0524 11:36:07.002459    1534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:36:07.002536    1534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 11:36:07.003586    1534 out.go:303] Setting JSON to false
	I0524 11:36:07.018861    1534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":338,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:36:07.018925    1534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:36:07.027769    1534 out.go:177] * [addons-514000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:36:07.031820    1534 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 11:36:07.031893    1534 notify.go:220] Checking for updates...
	I0524 11:36:07.038648    1534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:07.041871    1534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:36:07.045796    1534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:36:07.047102    1534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 11:36:07.049751    1534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 11:36:07.052962    1534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:36:07.056656    1534 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 11:36:07.063768    1534 start.go:295] selected driver: qemu2
	I0524 11:36:07.063774    1534 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:36:07.063780    1534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 11:36:07.066216    1534 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:36:07.068844    1534 out.go:177] * Automatically selected the socket_vmnet network
	I0524 11:36:07.072801    1534 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 11:36:07.072817    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:07.072825    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:07.072829    1534 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 11:36:07.072834    1534 start_flags.go:319] config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:07.072903    1534 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:36:07.080756    1534 out.go:177] * Starting control plane node addons-514000 in cluster addons-514000
	I0524 11:36:07.084763    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:07.084787    1534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:36:07.084798    1534 cache.go:57] Caching tarball of preloaded images
	I0524 11:36:07.084855    1534 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 11:36:07.084860    1534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 11:36:07.085026    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:07.085039    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json: {Name:mk030e94b16168c63405a9b01e247098a953bb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:07.085215    1534 cache.go:195] Successfully downloaded all kic artifacts
	I0524 11:36:07.085252    1534 start.go:364] acquiring machines lock for addons-514000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 11:36:07.085315    1534 start.go:368] acquired machines lock for "addons-514000" in 57.708µs
	I0524 11:36:07.085327    1534 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:07.085355    1534 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 11:36:07.093778    1534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 11:36:07.463575    1534 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0524 11:36:07.463635    1534 client.go:168] LocalClient.Create starting
	I0524 11:36:07.463808    1534 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 11:36:07.521208    1534 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 11:36:07.678481    1534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 11:36:08.060894    1534 main.go:141] libmachine: Creating SSH key...
	I0524 11:36:08.147520    1534 main.go:141] libmachine: Creating Disk image...
	I0524 11:36:08.147526    1534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 11:36:08.147754    1534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.231403    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.231426    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.231485    1534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0524 11:36:08.238737    1534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 11:36:08.238750    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.238766    1534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
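
The two qemu-img calls above are the whole disk build: convert the raw seed image written by libmachine into qcow2, then grow it to the requested 20000 MB. The equivalent by hand, with the machine directory shortened into a variable for readability:

	MACHINE="$HOME/.minikube/machines/addons-514000"
	# Convert the raw disk image into the qcow2 format QEMU boots from.
	qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
	# Grow the image by 20000 MB, matching Disk=20000MB in the machine config.
	qemu-img resize "$MACHINE/disk.qcow2" +20000M
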
	I0524 11:36:08.238773    1534 main.go:141] libmachine: Starting QEMU VM...
	I0524 11:36:08.238817    1534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:73:48:f5:f9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/disk.qcow2
	I0524 11:36:08.309201    1534 main.go:141] libmachine: STDOUT: 
	I0524 11:36:08.309237    1534 main.go:141] libmachine: STDERR: 
	I0524 11:36:08.309242    1534 main.go:141] libmachine: Attempt 0
	I0524 11:36:08.309258    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:10.311441    1534 main.go:141] libmachine: Attempt 1
	I0524 11:36:10.311529    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:12.313222    1534 main.go:141] libmachine: Attempt 2
	I0524 11:36:12.313245    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:14.315294    1534 main.go:141] libmachine: Attempt 3
	I0524 11:36:14.315307    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:16.317343    1534 main.go:141] libmachine: Attempt 4
	I0524 11:36:16.317356    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:18.319398    1534 main.go:141] libmachine: Attempt 5
	I0524 11:36:18.319426    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321607    1534 main.go:141] libmachine: Attempt 6
	I0524 11:36:20.321690    1534 main.go:141] libmachine: Searching for a:73:48:f5:f9:b3 in /var/db/dhcpd_leases ...
	I0524 11:36:20.321979    1534 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0524 11:36:20.322073    1534 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 11:36:20.322118    1534 main.go:141] libmachine: Found match: a:73:48:f5:f9:b3
	I0524 11:36:20.322159    1534 main.go:141] libmachine: IP: 192.168.105.2
	I0524 11:36:20.322182    1534 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
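
The retry loop above discovers the guest IP by polling /var/db/dhcpd_leases until the VM's MAC shows up; note that the lease file stores each octet without a leading zero, so 0a:73:48:f5:f9:b3 is searched as a:73:48:f5:f9:b3. A sketch of the same lookup, assuming the usual macOS dhcpd lease-file field names:

	MAC="a:73:48:f5:f9:b3"   # leading zeros stripped, as minikube searches it
	# Print the lease block around the matching hardware address; the
	# ip_address line in that block is the guest IP (192.168.105.2 here).
	grep -B 3 -A 3 "hw_address=1,$MAC" /var/db/dhcpd_leases
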
	I0524 11:36:22.345943    1534 machine.go:88] provisioning docker machine ...
	I0524 11:36:22.346010    1534 buildroot.go:166] provisioning hostname "addons-514000"
	I0524 11:36:22.346753    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.347771    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.347789    1534 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0524 11:36:22.440700    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0524 11:36:22.440862    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.441350    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.441366    1534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 11:36:22.513129    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 11:36:22.513148    1534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 11:36:22.513166    1534 buildroot.go:174] setting up certificates
	I0524 11:36:22.513196    1534 provision.go:83] configureAuth start
	I0524 11:36:22.513202    1534 provision.go:138] copyHostCerts
	I0524 11:36:22.513384    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 11:36:22.513907    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 11:36:22.514185    1534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 11:36:22.514351    1534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-514000]
	I0524 11:36:22.615592    1534 provision.go:172] copyRemoteCerts
	I0524 11:36:22.615660    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 11:36:22.615678    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:22.647614    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 11:36:22.654906    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 11:36:22.661956    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 11:36:22.668901    1534 provision.go:86] duration metric: configureAuth took 155.700959ms
	I0524 11:36:22.668909    1534 buildroot.go:189] setting minikube options for container-runtime
	I0524 11:36:22.669263    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:22.669315    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.669538    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.669543    1534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 11:36:22.728343    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 11:36:22.728351    1534 buildroot.go:70] root file system type: tmpfs
	I0524 11:36:22.728414    1534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 11:36:22.728455    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.728711    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.728749    1534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 11:36:22.797892    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 11:36:22.797940    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:22.798220    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:22.798231    1534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 11:36:23.149053    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 11:36:23.149067    1534 machine.go:91] provisioned docker machine in 803.097167ms
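
The diff-or-install one-liner above keeps the unit write idempotent: if the rendered docker.service.new already matches the installed unit, nothing changes; otherwise the new file is moved into place and docker is re-enabled and restarted. To confirm what actually landed on the guest (standard systemctl usage, not part of this log):

	# Show the docker unit exactly as systemd loaded it, drop-ins included.
	sudo systemctl cat docker.service
	# Verify docker came back up after the restart.
	sudo systemctl is-active docker
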
	I0524 11:36:23.149073    1534 client.go:171] LocalClient.Create took 15.685539208s
	I0524 11:36:23.149079    1534 start.go:167] duration metric: libmachine.API.Create for "addons-514000" took 15.685619292s
	I0524 11:36:23.149084    1534 start.go:300] post-start starting for "addons-514000" (driver="qemu2")
	I0524 11:36:23.149087    1534 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 11:36:23.149151    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 11:36:23.149161    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.182740    1534 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 11:36:23.184182    1534 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 11:36:23.184191    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 11:36:23.184263    1534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 11:36:23.184291    1534 start.go:303] post-start completed in 35.204125ms
	I0524 11:36:23.184667    1534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/config.json ...
	I0524 11:36:23.184838    1534 start.go:128] duration metric: createHost completed in 16.099587584s
	I0524 11:36:23.184860    1534 main.go:141] libmachine: Using SSH client type: native
	I0524 11:36:23.185079    1534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e286d0] 0x100e2b130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0524 11:36:23.185084    1534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 11:36:23.240206    1534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684953383.421013085
	
	I0524 11:36:23.240212    1534 fix.go:207] guest clock: 1684953383.421013085
	I0524 11:36:23.240216    1534 fix.go:220] Guest: 2023-05-24 11:36:23.421013085 -0700 PDT Remote: 2023-05-24 11:36:23.184841 -0700 PDT m=+16.200821626 (delta=236.172085ms)
	I0524 11:36:23.240228    1534 fix.go:191] guest clock delta is within tolerance: 236.172085ms
	I0524 11:36:23.240231    1534 start.go:83] releasing machines lock for "addons-514000", held for 16.155020041s
	I0524 11:36:23.240534    1534 ssh_runner.go:195] Run: cat /version.json
	I0524 11:36:23.240542    1534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 11:36:23.240552    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.240589    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:23.271294    1534 ssh_runner.go:195] Run: systemctl --version
	I0524 11:36:23.356274    1534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 11:36:23.358206    1534 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 11:36:23.358253    1534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 11:36:23.363251    1534 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 11:36:23.363272    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:23.363358    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:23.374219    1534 docker.go:633] Got preloaded images: 
	I0524 11:36:23.374227    1534 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 11:36:23.374272    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:23.377135    1534 ssh_runner.go:195] Run: which lz4
	I0524 11:36:23.378475    1534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0524 11:36:23.379822    1534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 11:36:23.379833    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 11:36:24.715030    1534 docker.go:597] Took 1.336609 seconds to copy over tarball
	I0524 11:36:24.715105    1534 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 11:36:25.802869    1534 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.087750334s)
	I0524 11:36:25.802885    1534 ssh_runner.go:146] rm: /preloaded.tar.lz4
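
Copying the preload over and unpacking it replaces pulling the eight bundled images individually, which is why this step dominates the early timings. The extraction hands decompression to lz4 via tar's -I flag and unpacks straight into /var, restoring the docker image store in one pass:

	# Reproduce the extraction by hand inside the guest (both tools ship in
	# the minikube ISO); this repopulates /var/lib/docker from the tarball.
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
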
	I0524 11:36:25.818539    1534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 11:36:25.821398    1534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 11:36:25.826757    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:25.912573    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:27.259007    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346426625s)
	I0524 11:36:27.259050    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.259161    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.264502    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 11:36:27.267902    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 11:36:27.271357    1534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.271387    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 11:36:27.274823    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.278019    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 11:36:27.280856    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 11:36:27.283904    1534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 11:36:27.287473    1534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 11:36:27.291108    1534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 11:36:27.294288    1534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 11:36:27.297250    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.376117    1534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 11:36:27.384917    1534 start.go:481] detecting cgroup driver to use...
	I0524 11:36:27.384994    1534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 11:36:27.390435    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.395426    1534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 11:36:27.402483    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 11:36:27.406870    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.411215    1534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 11:36:27.451530    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 11:36:27.456795    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 11:36:27.461922    1534 ssh_runner.go:195] Run: which cri-dockerd
	I0524 11:36:27.463049    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 11:36:27.465876    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 11:36:27.470660    1534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 11:36:27.538638    1534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 11:36:27.616092    1534 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 11:36:27.616109    1534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 11:36:27.621459    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:27.708405    1534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 11:36:28.851963    1534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.143548708s)
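
The 144-byte daemon.json pushed a few lines above carries the cgroup-driver choice the log reports ("cgroupfs"). Its exact contents are not captured here, but a representative file for that configuration would be written roughly like this (contents assumed, not taken from this log):

	# Hypothetical reconstruction of the daemon.json minikube writes for
	# the cgroupfs driver; only the cgroupdriver setting is implied by the log.
	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
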
	I0524 11:36:28.852015    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:28.939002    1534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 11:36:29.020013    1534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 11:36:29.108812    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.187424    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 11:36:29.194801    1534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 11:36:29.274472    1534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 11:36:29.298400    1534 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 11:36:29.298499    1534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 11:36:29.300633    1534 start.go:549] Will wait 60s for crictl version
	I0524 11:36:29.300681    1534 ssh_runner.go:195] Run: which crictl
	I0524 11:36:29.302069    1534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 11:36:29.320125    1534 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
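
	The crictl probe above is what gates the "Will wait 60s for crictl version" step: it confirms cri-dockerd is answering CRI requests on the socket named in /etc/crictl.yaml. Run by hand it looks like this (standard crictl flags):

	# Query runtime name/version over the cri-dockerd socket configured above.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
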
	I0524 11:36:29.320196    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.329425    1534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 11:36:29.346012    1534 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 11:36:29.346159    1534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 11:36:29.347609    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.351578    1534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:36:29.351619    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.359168    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.359177    1534 docker.go:563] Images already preloaded, skipping extraction
	I0524 11:36:29.359234    1534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 11:36:29.366578    1534 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 11:36:29.366587    1534 cache_images.go:84] Images are preloaded, skipping loading
	I0524 11:36:29.366634    1534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 11:36:29.376722    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:29.376734    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:29.376743    1534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 11:36:29.376755    1534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 11:36:29.376831    1534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
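
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are exactly what minikube renders into /var/tmp/minikube/kubeadm.yaml further below. They can be sanity-checked without touching the node (standard kubeadm usage, assuming kubeadm is on PATH):

	# Validate the rendered config and print what kubeadm would do, creating nothing.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
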
	
	I0524 11:36:29.376873    1534 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
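
	The kubelet unit above uses the same clear-then-set ExecStart pattern as the docker unit, pinning the CRI socket, the node IP and the hostname override to this profile. Two quick checks of what the running kubelet actually picked up (generic commands, not from this log):

	# Show the kubelet unit plus the 10-kubeadm.conf drop-in as systemd merged them.
	sudo systemctl cat kubelet
	# Print the live kubelet command line straight from /proc.
	tr '\0' ' ' </proc/"$(pgrep -x kubelet)"/cmdline; echo
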
	I0524 11:36:29.376934    1534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 11:36:29.379950    1534 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 11:36:29.379980    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 11:36:29.383262    1534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 11:36:29.388298    1534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 11:36:29.393370    1534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0524 11:36:29.398040    1534 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0524 11:36:29.399441    1534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 11:36:29.403560    1534 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0524 11:36:29.403576    1534 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.403733    1534 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 11:36:29.494908    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt ...
	I0524 11:36:29.494916    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt: {Name:mkde13471093958a457d9307a0c213d7ba461177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495144    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key ...
	I0524 11:36:29.495147    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key: {Name:mk5b2a6f100829fa25412e4c96a6b4d9b186c9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.495264    1534 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 11:36:29.601357    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt ...
	I0524 11:36:29.601364    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt: {Name:mkc3f94501092c9c51cfa6d329a0a2c4cec184ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601593    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key ...
	I0524 11:36:29.601596    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key: {Name:mk7acf18000a82a656fee32bbd454a3c129dabde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.601733    1534 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key
	I0524 11:36:29.601741    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0524 11:36:29.653842    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt ...
	I0524 11:36:29.653845    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: {Name:mk3856cd37d1f07be2cc9902b19f9498b880112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654036    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key ...
	I0524 11:36:29.654040    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.key: {Name:mkbc8808085e1496dcb2b3e03156e443b7b7994b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.654176    1534 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969
	I0524 11:36:29.654188    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 11:36:29.724674    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 ...
	I0524 11:36:29.724678    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969: {Name:mk424188d0f28cb0aa520452bb8ec4583a153ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724815    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 ...
	I0524 11:36:29.724818    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969: {Name:mk98c3231c62717b32e2418cabd759d6ad5645ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.724926    1534 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt
	I0524 11:36:29.725147    1534 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key
	I0524 11:36:29.725241    1534 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key
	I0524 11:36:29.725256    1534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0524 11:36:29.842949    1534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0524 11:36:29.842953    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mk581c30062675e68aafc25cb79bfc8a62fd3e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843105    1534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key ...
	I0524 11:36:29.843110    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key: {Name:mk019f6bac347a368012a36cea939860ce210025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:29.843389    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 11:36:29.843593    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 11:36:29.843619    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 11:36:29.843756    1534 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 11:36:29.844302    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 11:36:29.851879    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 11:36:29.859249    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 11:36:29.866847    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 11:36:29.873646    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 11:36:29.880415    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 11:36:29.887466    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 11:36:29.894575    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 11:36:29.901581    1534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 11:36:29.908027    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 11:36:29.914140    1534 ssh_runner.go:195] Run: openssl version
	I0524 11:36:29.916182    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 11:36:29.919659    1534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921372    1534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.921394    1534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 11:36:29.923349    1534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
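
The b5213941.0 symlink created above follows OpenSSL's subject-hash convention: TLS clients locate a CA in /etc/ssl/certs by the hash of its subject name, so the link must be named after that hash. It can be recomputed to verify the link:

	# Should print b5213941, matching the symlink name used above.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
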
	I0524 11:36:29.926902    1534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 11:36:29.928503    1534 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 11:36:29.928540    1534 kubeadm.go:404] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-514000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:36:29.928599    1534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 11:36:29.935998    1534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 11:36:29.939589    1534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 11:36:29.942818    1534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 11:36:29.945835    1534 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 11:36:29.945853    1534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 11:36:29.967889    1534 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 11:36:29.967941    1534 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 11:36:30.020294    1534 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 11:36:30.020350    1534 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 11:36:30.020400    1534 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0524 11:36:30.076237    1534 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 11:36:30.084415    1534 out.go:204]   - Generating certificates and keys ...
	I0524 11:36:30.084460    1534 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 11:36:30.084494    1534 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 11:36:30.272940    1534 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 11:36:30.453046    1534 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 11:36:30.580586    1534 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 11:36:30.639773    1534 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 11:36:30.738497    1534 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 11:36:30.738567    1534 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.858811    1534 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 11:36:30.858875    1534 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0524 11:36:30.935967    1534 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 11:36:30.967281    1534 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 11:36:31.073416    1534 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 11:36:31.073445    1534 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 11:36:31.335469    1534 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 11:36:31.530915    1534 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 11:36:31.573436    1534 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 11:36:31.637219    1534 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 11:36:31.645102    1534 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 11:36:31.645531    1534 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 11:36:31.645571    1534 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 11:36:31.737201    1534 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 11:36:31.741345    1534 out.go:204]   - Booting up control plane ...
	I0524 11:36:31.741390    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 11:36:31.741439    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 11:36:31.741469    1534 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 11:36:31.741512    1534 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 11:36:31.741595    1534 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 11:36:35.739695    1534 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002246 seconds
	I0524 11:36:35.739796    1534 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 11:36:35.750536    1534 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 11:36:36.270805    1534 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 11:36:36.271028    1534 kubeadm.go:322] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 11:36:36.776691    1534 kubeadm.go:322] [bootstrap-token] Using token: zlw52u.ca0agirmjwjpmd4f
	I0524 11:36:36.783931    1534 out.go:204]   - Configuring RBAC rules ...
	I0524 11:36:36.784005    1534 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 11:36:36.785227    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 11:36:36.791945    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 11:36:36.793322    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 11:36:36.794557    1534 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 11:36:36.795891    1534 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 11:36:36.802617    1534 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 11:36:36.956552    1534 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 11:36:37.187637    1534 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 11:36:37.187937    1534 kubeadm.go:322] 
	I0524 11:36:37.187967    1534 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 11:36:37.187973    1534 kubeadm.go:322] 
	I0524 11:36:37.188044    1534 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 11:36:37.188053    1534 kubeadm.go:322] 
	I0524 11:36:37.188069    1534 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 11:36:37.188099    1534 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 11:36:37.188128    1534 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 11:36:37.188133    1534 kubeadm.go:322] 
	I0524 11:36:37.188155    1534 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 11:36:37.188158    1534 kubeadm.go:322] 
	I0524 11:36:37.188189    1534 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 11:36:37.188193    1534 kubeadm.go:322] 
	I0524 11:36:37.188219    1534 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 11:36:37.188277    1534 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 11:36:37.188314    1534 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 11:36:37.188322    1534 kubeadm.go:322] 
	I0524 11:36:37.188361    1534 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 11:36:37.188399    1534 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 11:36:37.188411    1534 kubeadm.go:322] 
	I0524 11:36:37.188464    1534 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188516    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 11:36:37.188534    1534 kubeadm.go:322] 	--control-plane 
	I0524 11:36:37.188538    1534 kubeadm.go:322] 
	I0524 11:36:37.188580    1534 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 11:36:37.188584    1534 kubeadm.go:322] 
	I0524 11:36:37.188629    1534 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zlw52u.ca0agirmjwjpmd4f \
	I0524 11:36:37.188681    1534 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
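	For reference, the --discovery-token-ca-cert-hash printed above can be recomputed on the control plane, and a fresh join command can be generated if the token expires. A minimal sketch, assuming the default kubeadm PKI path /etc/kubernetes/pki/ca.crt:
	
	  # Recompute the CA public-key hash used by --discovery-token-ca-cert-hash
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  # Or print a complete, fresh join command (creates a new token)
	  kubeadm token create --print-join-command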
	I0524 11:36:37.188736    1534 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 11:36:37.188819    1534 kubeadm.go:322] W0524 18:36:30.200947    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188904    1534 kubeadm.go:322] W0524 18:36:31.916526    1295 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 11:36:37.188909    1534 cni.go:84] Creating CNI manager for ""
	I0524 11:36:37.188916    1534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:36:37.195686    1534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 11:36:37.199715    1534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 11:36:37.203087    1534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
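	The 457-byte conflist scp'd above is minikube's bridge CNI config. The exact file isn't shown in this log; the following is a representative /etc/cni/net.d/1-k8s.conflist assuming the stock bridge + portmap plugins (field values are illustrative, not the bytes actually written):
	
	  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF' >/dev/null
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF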
	I0524 11:36:37.208259    1534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 11:36:37.208303    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.208333    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=addons-514000 minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.258566    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:37.271047    1534 ops.go:34] apiserver oom_adj: -16
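	The ops.go line above reports the value read by the oom_adj check: -16 tells the kernel's OOM killer to strongly avoid kube-apiserver under memory pressure. On current kernels the canonical interface is oom_score_adj rather than the legacy oom_adj; an equivalent quick check, reusing the same pgrep pattern:
	
	  cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # negative = spared by the OOM killer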
	I0524 11:36:37.796169    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.296162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:38.796257    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.295049    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:39.796244    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:40.796162    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.296458    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:41.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.296250    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:42.796323    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.296423    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:43.796432    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.296246    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:44.796149    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.296189    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:45.796183    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.296206    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:46.796370    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.296192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:47.796245    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.296219    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:48.796135    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.296201    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:49.796192    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.296070    1534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 11:36:50.332878    1534 kubeadm.go:1076] duration metric: took 13.124695208s to wait for elevateKubeSystemPrivileges.
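	The burst of `kubectl get sa default` calls above is a ~500ms poll loop: minikube retries until the serviceaccount controller has created the "default" ServiceAccount, since pods cannot be admitted in a namespace before its default ServiceAccount exists. A stand-alone equivalent (illustrative loop, not minikube's actual code):
	
	  until kubectl get sa default -n default >/dev/null 2>&1; do sleep 0.5; done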
	I0524 11:36:50.332892    1534 kubeadm.go:406] StartCluster complete in 20.404490625s
	I0524 11:36:50.332916    1534 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333079    1534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:36:50.333301    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:36:50.333499    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 11:36:50.333541    1534 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0524 11:36:50.333603    1534 addons.go:66] Setting ingress=true in profile "addons-514000"
	I0524 11:36:50.333609    1534 addons.go:66] Setting registry=true in profile "addons-514000"
	I0524 11:36:50.333611    1534 addons.go:228] Setting addon ingress=true in "addons-514000"
	I0524 11:36:50.333614    1534 addons.go:228] Setting addon registry=true in "addons-514000"
	I0524 11:36:50.333650    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333646    1534 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0524 11:36:50.333656    1534 addons.go:66] Setting storage-provisioner=true in profile "addons-514000"
	I0524 11:36:50.333660    1534 addons.go:228] Setting addon storage-provisioner=true in "addons-514000"
	I0524 11:36:50.333671    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333804    1534 addons.go:66] Setting metrics-server=true in profile "addons-514000"
	I0524 11:36:50.333879    1534 addons.go:228] Setting addon metrics-server=true in "addons-514000"
	I0524 11:36:50.333906    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.333926    1534 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.333947    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 11:36:50.333682    1534 addons.go:66] Setting ingress-dns=true in profile "addons-514000"
	I0524 11:36:50.333976    1534 addons.go:228] Setting addon ingress-dns=true in "addons-514000"
	I0524 11:36:50.333995    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334035    1534 addons.go:66] Setting gcp-auth=true in profile "addons-514000"
	I0524 11:36:50.333605    1534 addons.go:66] Setting volumesnapshots=true in profile "addons-514000"
	I0524 11:36:50.334092    1534 addons.go:228] Setting addon volumesnapshots=true in "addons-514000"
	I0524 11:36:50.334116    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334159    1534 addons.go:66] Setting default-storageclass=true in profile "addons-514000"
	I0524 11:36:50.334172    1534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0524 11:36:50.333653    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334095    1534 mustload.go:65] Loading cluster: addons-514000
	I0524 11:36:50.334706    1534 addons.go:66] Setting inspektor-gadget=true in profile "addons-514000"
	I0524 11:36:50.334713    1534 addons.go:228] Setting addon inspektor-gadget=true in "addons-514000"
	I0524 11:36:50.333694    1534 addons.go:66] Setting cloud-spanner=true in profile "addons-514000"
	I0524 11:36:50.334861    1534 addons.go:228] Setting addon cloud-spanner=true in "addons-514000"
	I0524 11:36:50.334877    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334897    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.334942    1534 host.go:66] Checking if "addons-514000" exists ...
	W0524 11:36:50.335292    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335303    1534 addons.go:274] "addons-514000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335306    1534 addons.go:464] Verifying addon metrics-server=true in "addons-514000"
	W0524 11:36:50.335329    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335333    1534 addons.go:274] "addons-514000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335353    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335359    1534 addons.go:274] "addons-514000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0524 11:36:50.335362    1534 addons.go:464] Verifying addon registry=true in "addons-514000"
	W0524 11:36:50.335391    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335411    1534 addons.go:274] "addons-514000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0524 11:36:50.335412    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335520    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	W0524 11:36:50.335588    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.335599    1534 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0524 11:36:50.335650    1534 host.go:54] host status for "addons-514000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0524 11:36:50.339535    1534 out.go:177] * Verifying registry addon...
	W0524 11:36:50.349556    1534 addons.go:274] "addons-514000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0524 11:36:50.349673    1534 addons.go:274] "addons-514000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0524 11:36:50.349673    1534 addons.go:464] Verifying addon ingress=true in "addons-514000"
	W0524 11:36:50.349688    1534 addons_storage_classes.go:55] "addons-514000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0524 11:36:50.349678    1534 addons.go:274] "addons-514000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0524 11:36:50.350008    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0524 11:36:50.350257    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.353441    1534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 11:36:50.357663    1534 addons.go:228] Setting addon default-storageclass=true in "addons-514000"
	I0524 11:36:50.360618    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:50.360641    1534 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.360646    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 11:36:50.360653    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.357776    1534 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0524 11:36:50.357776    1534 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0524 11:36:50.361446    1534 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.364279    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0524 11:36:50.365539    1534 out.go:177] * Verifying ingress addon...
	I0524 11:36:50.369698    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 11:36:50.369727    1534 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0524 11:36:50.375900    1534 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0524 11:36:50.381598    1534 out.go:177] * Verifying csi-hostpath-driver addon...
	I0524 11:36:50.387638    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0524 11:36:50.387638    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.387646    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:50.388147    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0524 11:36:50.390627    1534 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0524 11:36:50.391169    1534 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0524 11:36:50.400375    1534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
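	The sed pipeline above patches the live coredns ConfigMap in place: it enables query logging and adds a hosts block so that host.minikube.internal resolves to the host machine (192.168.105.1). The resulting Corefile fragment, reconstructed from the sed expressions (other plugins elided with "..."):
	
	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.105.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf ...
	      ...
	  }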
	I0524 11:36:50.433263    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 11:36:50.499595    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0524 11:36:50.499607    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0524 11:36:50.511369    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 11:36:50.545082    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0524 11:36:50.545093    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0524 11:36:50.571075    1534 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0524 11:36:50.571085    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0524 11:36:50.614490    1534 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0524 11:36:50.614502    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0524 11:36:50.628252    1534 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.628261    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0524 11:36:50.647925    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0524 11:36:50.858973    1534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0524 11:36:50.859000    1534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 11:36:50.862644    1534 out.go:177] * Verifying Kubernetes components...
	I0524 11:36:50.870714    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:51.015230    1534 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0524 11:36:51.239743    1534 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.239769    1534 retry.go:31] will retry after 300.967986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0524 11:36:51.240163    1534 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242031    1534 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0524 11:36:51.242040    1534 node_ready.go:38] duration metric: took 1.869375ms waiting for node "addons-514000" to be "Ready" ...
	I0524 11:36:51.242043    1534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:51.247820    1534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:51.542933    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
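	The `apply --force` retry above follows the failure logged at 11:36:51, which is the classic CRD/CR ordering race: the first apply submitted the VolumeSnapshotClass in the same batch as the CRD that defines it, before the API server had established the new type ("ensure CRDs are installed first"). The retry succeeds largely because the CRDs created by the first attempt are established by the time it runs. A hedged alternative that avoids the race outright is a two-phase apply that blocks on the Established condition:
	
	  # Illustrative two-phase apply; file paths match the addon manifests above
	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml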
	I0524 11:36:53.257970    1534 pod_ready.go:92] pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.257986    1534 pod_ready.go:81] duration metric: took 2.01016425s waiting for pod "coredns-5d78c9869d-dmkfx" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.257991    1534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260855    1534 pod_ready.go:92] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.260862    1534 pod_ready.go:81] duration metric: took 2.866833ms waiting for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.260867    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263593    1534 pod_ready.go:92] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.263598    1534 pod_ready.go:81] duration metric: took 2.728ms waiting for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.263603    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266314    1534 pod_ready.go:92] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.266322    1534 pod_ready.go:81] duration metric: took 2.716417ms waiting for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.266326    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268820    1534 pod_ready.go:92] pod "kube-proxy-2gj6m" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.268826    1534 pod_ready.go:81] duration metric: took 2.496209ms waiting for pod "kube-proxy-2gj6m" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.268830    1534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659694    1534 pod_ready.go:92] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0524 11:36:53.659709    1534 pod_ready.go:81] duration metric: took 390.87725ms waiting for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0524 11:36:53.659719    1534 pod_ready.go:38] duration metric: took 2.417685875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 11:36:53.659737    1534 api_server.go:52] waiting for apiserver process to appear ...
	I0524 11:36:53.659818    1534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 11:36:54.012047    1534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.469105375s)
	I0524 11:36:54.012061    1534 api_server.go:72] duration metric: took 3.153054583s to wait for apiserver process to appear ...
	I0524 11:36:54.012066    1534 api_server.go:88] waiting for apiserver healthz status ...
	I0524 11:36:54.012074    1534 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0524 11:36:54.015086    1534 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0524 11:36:54.015747    1534 api_server.go:141] control plane version: v1.27.2
	I0524 11:36:54.015755    1534 api_server.go:131] duration metric: took 3.685917ms to wait for apiserver health ...
	I0524 11:36:54.015758    1534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 11:36:54.018844    1534 system_pods.go:59] 9 kube-system pods found
	I0524 11:36:54.018857    1534 system_pods.go:61] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.018861    1534 system_pods.go:61] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.018863    1534 system_pods.go:61] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.018865    1534 system_pods.go:61] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.018868    1534 system_pods.go:61] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.018870    1534 system_pods.go:61] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.018873    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018876    1534 system_pods.go:61] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.018879    1534 system_pods.go:61] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.018881    1534 system_pods.go:74] duration metric: took 3.121167ms to wait for pod list to return data ...
	I0524 11:36:54.018883    1534 default_sa.go:34] waiting for default service account to be created ...
	I0524 11:36:54.057892    1534 default_sa.go:45] found service account: "default"
	I0524 11:36:54.057899    1534 default_sa.go:55] duration metric: took 39.013541ms for default service account to be created ...
	I0524 11:36:54.057902    1534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 11:36:54.259995    1534 system_pods.go:86] 9 kube-system pods found
	I0524 11:36:54.260005    1534 system_pods.go:89] "coredns-5d78c9869d-dmkfx" [03f7079a-733f-488e-a6b1-65e0ca65cffe] Running
	I0524 11:36:54.260008    1534 system_pods.go:89] "etcd-addons-514000" [f53490df-9f65-472d-8c24-fedfb5ff0b40] Running
	I0524 11:36:54.260011    1534 system_pods.go:89] "kube-apiserver-addons-514000" [39ad1080-0c49-4108-8f04-7aded262c978] Running
	I0524 11:36:54.260014    1534 system_pods.go:89] "kube-controller-manager-addons-514000" [8aed5f2f-4439-495f-9454-34920caed37f] Running
	I0524 11:36:54.260016    1534 system_pods.go:89] "kube-proxy-2gj6m" [8d0787ef-32df-4a09-ad1d-48a69a40bb43] Running
	I0524 11:36:54.260019    1534 system_pods.go:89] "kube-scheduler-addons-514000" [98ce7db6-4f70-4fb1-992f-7cf2bf68fb43] Running
	I0524 11:36:54.260023    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-j5jhp" [953964d7-20b0-4fa4-b1ca-f05e6adb94e3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260027    1534 system_pods.go:89] "snapshot-controller-75bbb956b9-txrxl" [07fe4bd3-7311-4ec4-938d-a195ed8535a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0524 11:36:54.260030    1534 system_pods.go:89] "storage-provisioner" [49765353-a5e1-4781-ab06-a554a575daa5] Running
	I0524 11:36:54.260033    1534 system_pods.go:126] duration metric: took 202.129584ms to wait for k8s-apps to be running ...
	I0524 11:36:54.260037    1534 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 11:36:54.260088    1534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 11:36:54.265390    1534 system_svc.go:56] duration metric: took 5.350666ms WaitForService to wait for kubelet.
	I0524 11:36:54.265399    1534 kubeadm.go:581] duration metric: took 3.406395625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 11:36:54.265408    1534 node_conditions.go:102] verifying NodePressure condition ...
	I0524 11:36:54.458086    1534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 11:36:54.458097    1534 node_conditions.go:123] node cpu capacity is 2
	I0524 11:36:54.458103    1534 node_conditions.go:105] duration metric: took 192.694167ms to run NodePressure ...
	I0524 11:36:54.458107    1534 start.go:228] waiting for startup goroutines ...
	I0524 11:36:56.972492    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0524 11:36:56.972559    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.029376    1534 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0524 11:36:57.038824    1534 addons.go:228] Setting addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.038864    1534 host.go:66] Checking if "addons-514000" exists ...
	I0524 11:36:57.040182    1534 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0524 11:36:57.040196    1534 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0524 11:36:57.078053    1534 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0524 11:36:57.082115    1534 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0524 11:36:57.085015    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0524 11:36:57.085022    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0524 11:36:57.091862    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0524 11:36:57.091873    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0524 11:36:57.099462    1534 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.099472    1534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0524 11:36:57.106631    1534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0524 11:36:57.550488    1534 addons.go:464] Verifying addon gcp-auth=true in "addons-514000"
	I0524 11:36:57.555392    1534 out.go:177] * Verifying gcp-auth addon...
	I0524 11:36:57.561721    1534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0524 11:36:57.566760    1534 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0524 11:36:57.566769    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.070711    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:58.570942    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.076515    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:36:59.570540    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.070962    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:00.571104    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.071573    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:01.571018    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.072518    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:02.570869    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.071445    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:03.570661    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.070807    1534 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0524 11:37:04.570832    1534 kapi.go:107] duration metric: took 7.009157292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0524 11:37:04.574809    1534 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0524 11:37:04.579620    1534 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0524 11:37:04.583658    1534 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
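	The three UI lines above describe the gcp-auth mutating webhook: every newly created pod gets the credentials mounted unless it opts out via the gcp-auth-skip-secret label. A quick illustrative opt-out (pod name and image are placeholders, not from this run):
	
	  kubectl run skip-demo --image=nginx --labels="gcp-auth-skip-secret=true"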
	I0524 11:42:50.357445    1534 kapi.go:107] duration metric: took 6m0.009773291s to wait for kubernetes.io/minikube-addons=registry ...
	W0524 11:42:50.357907    1534 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0524 11:42:50.387243    1534 kapi.go:107] duration metric: took 6m0.001495875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0524 11:42:50.387315    1534 kapi.go:107] duration metric: took 6m0.013814333s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0524 11:42:50.387474    1534 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0524 11:42:50.395532    1534 out.go:177] * Enabled addons: metrics-server, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0524 11:42:50.403494    1534 addons.go:499] enable addons completed in 6m0.072361709s: enabled=[metrics-server ingress-dns inspektor-gadget cloud-spanner storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0524 11:42:50.403556    1534 start.go:233] waiting for cluster config update ...
	I0524 11:42:50.403587    1534 start.go:242] writing updated cluster config ...
	I0524 11:42:50.408325    1534 ssh_runner.go:195] Run: rm -f paused
	I0524 11:42:50.568016    1534 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 11:42:50.572568    1534 out.go:177] 
	W0524 11:42:50.576443    1534 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 11:42:50.580476    1534 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 11:42:50.587567    1534 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
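	The skew warning above is expected: kubectl's support policy covers only one minor version of skew against the API server, and a 1.25 client against a 1.27 server is two. Checking both sides, or using the bundled client as the hint suggests, sidesteps it:
	
	  kubectl version --short          # prints client and server versions (flag still present in v1.25)
	  minikube kubectl -- version      # uses a kubectl matching the cluster version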
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 18:56:42 UTC. --
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516120296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.516129229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.568789730Z" level=info msg="ignoring event" container=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569050551Z" level=info msg="shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569107330Z" level=warning msg="cleaning up after shim disconnected" id=1c671f6b3a5b5c85e0407b694ee5e6e8a1a70743c986a11000612952822acf1e namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.569117420Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607638824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607702137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607716942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:01 addons-514000 dockerd[922]: time="2023-05-24T18:37:01.607727942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:01 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f47df037d99569ad6cd8f4ef2c3926ab0aed2bb5b85f513c520fc0abc42c67f3/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 18:37:01 addons-514000 dockerd[916]: time="2023-05-24T18:37:01.953788187Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557702813Z" level=info msg="shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557754555Z" level=warning msg="cleaning up after shim disconnected" id=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 namespace=moby
	May 24 18:37:02 addons-514000 dockerd[922]: time="2023-05-24T18:37:02.557759977Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:02 addons-514000 dockerd[916]: time="2023-05-24T18:37:02.558086156Z" level=info msg="ignoring event" container=953c17ffdc8ac0b98e66b9d8887f99c9b0a49236e6fd43f2647edb75c5e82150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[916]: time="2023-05-24T18:37:03.602683250Z" level=info msg="ignoring event" container=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603089611Z" level=info msg="shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603147527Z" level=warning msg="cleaning up after shim disconnected" id=7c145b5b88ef04d8237d20d8374f3a2c4283c8c7c120b4d01e6d9eda3df3496d namespace=moby
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.603154445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 18:37:03 addons-514000 cri-dockerd[1138]: time="2023-05-24T18:37:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856707697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856808407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856985177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 18:37:03 addons-514000 dockerd[922]: time="2023-05-24T18:37:03.856997233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	d1ad6d2cd7d4d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              19 minutes ago      Running             gcp-auth                     0                   f47df037d9956
	2623eeac77855       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   60ea5019d1f26
	61fdb94dca547       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   1f82f1afb5ca6
	5e708965dbb0a       97e04611ad434                                                                                                             19 minutes ago      Running             coredns                      0                   eaf04536825bb
	c6d1bdca910b8       ba04bb24b9575                                                                                                             19 minutes ago      Running             storage-provisioner          0                   55be207be2898
	bf84d832ec967       29921a0845422                                                                                                             19 minutes ago      Running             kube-proxy                   0                   59d50204b0754
	046435c695b1e       305d7ed1dae28                                                                                                             20 minutes ago      Running             kube-scheduler               0                   cd9a002bb369c
	aa80b21f85087       2ee705380c3c5                                                                                                             20 minutes ago      Running             kube-controller-manager      0                   0ebf3f27cb768
	d5556d8565d49       24bc64e911039                                                                                                             20 minutes ago      Running             etcd                         0                   37fcc92ec98a7
	a485542b186e4       72c9df6be7f1b                                                                                                             20 minutes ago      Running             kube-apiserver               0                   383872bb10f81
	
	* 
	* ==> coredns [5e708965dbb0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59819 - 54023 "HINFO IN 5089267470380203033.66065138292483152. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.424436073s
	[INFO] 10.244.0.7:57634 - 60032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112931s
	[INFO] 10.244.0.7:36916 - 20311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078547s
	[INFO] 10.244.0.7:53888 - 30613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056548s
	[INFO] 10.244.0.7:40805 - 41575 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000031112s
	[INFO] 10.244.0.7:39418 - 54110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031567s
	[INFO] 10.244.0.7:45485 - 20279 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113676s
	[INFO] 10.244.0.7:49511 - 45953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000780781s
	[INFO] 10.244.0.7:49660 - 37020 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00090552s
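	The run of NXDOMAIN answers above is normal resolver behavior, not an error: with ndots:5 in the pod's resolv.conf (see the cri-dockerd rewrite logged in the Docker section above), an external name like storage.googleapis.com is first tried against each cluster search domain before the absolute query succeeds with NOERROR. A trailing dot makes the name fully qualified and skips the search list; from any pod that has nslookup installed (pod name is a placeholder):
	
	  kubectl exec <pod> -- nslookup storage.googleapis.com.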
	
	* 
	* ==> describe nodes <==
	* Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T11_36_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 18:56:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 18:52:27 +0000   Wed, 24 May 2023 18:36:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc66183cd0c646be999944d821185b81
	  System UUID:                cc66183cd0c646be999944d821185b81
	  Boot ID:                    2cd753bf-40ed-44ce-928e-d8bb002a6012
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-5429c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5d78c9869d-dmkfx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-514000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         20m
	  kube-system                 kube-apiserver-addons-514000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-addons-514000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-2gj6m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-514000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 snapshot-controller-75bbb956b9-j5jhp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-txrxl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
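A quick arithmetic check on the percentages above, taking capacity from the Capacity block: CPU requests of 750m against 2 CPUs are 750/2000 = 37.5%, truncated to the 37% shown, and the 170Mi (174080Ki) memory request/limit against 3905972Ki allocatable comes to about 4.5%, shown as 4%. Nothing on this node is close to overcommitted, so the CloudSpanner failure is not a scheduling-capacity problem.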
	
	* 
	* ==> dmesg <==
	* [May24 18:36] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.727578] EINJ: EINJ table not found.
	[  +0.656332] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000915] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.905553] systemd-fstab-generator[471]: Ignoring "noauto" for root device
	[  +0.096232] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +2.874276] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +1.463827] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.166355] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.076432] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.091985] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +1.135416] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091978] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.084182] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.089221] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.079548] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.085105] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +2.454751] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
	[  +5.146027] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[ +14.118818] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.617169] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.922890] kauditd_printk_skb: 33 callbacks suppressed
	[May24 18:37] kauditd_printk_skb: 17 callbacks suppressed
	
	* 
	* ==> etcd [d5556d8565d4] <==
	* {"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-514000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T18:36:33.834Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:36:33.838Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T18:46:33.876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2023-05-24T18:46:33.881Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.400485ms","hash":3638416343}
	{"level":"info","ts":"2023-05-24T18:46:33.882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638416343,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2023-05-24T18:51:33.887Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1145}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1145,"took":"2.024563ms","hash":894933936}
	{"level":"info","ts":"2023-05-24T18:51:33.890Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":894933936,"revision":1145,"compact-revision":846}
	{"level":"info","ts":"2023-05-24T18:56:33.899Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1444}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1444,"took":"1.805765ms","hash":2332186912}
	{"level":"info","ts":"2023-05-24T18:56:33.902Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2332186912,"revision":1444,"compact-revision":1145}
	
	* 
	* ==> gcp-auth [d1ad6d2cd7d4] <==
	* 2023/05/24 18:37:03 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  18:56:42 up 20 min,  0 users,  load average: 0.84, 0.70, 0.51
	Linux addons-514000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a485542b186e] <==
	* I0524 18:36:51.393166       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:51.395637       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:36:51.395722       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:51.400840       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:36:51.401085       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:36:57.593427       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.98.17.234]
	I0524 18:36:57.610279       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0524 18:41:34.543984       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:41:34.544093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.544191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.544366       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.554262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.554305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:46:34.559325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:46:34.559355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.542946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.543557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.550014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.550133       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:51:34.556769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:51:34.556848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.536214       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.536373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0524 18:56:34.548003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0524 18:56:34.548106       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [aa80b21f8508] <==
	* I0524 18:37:01.505510       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:01.519013       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.495937       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:02.582819       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.506916       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.513803       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:03.593747       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.596306       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598360       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:03.598521       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:03.685792       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.524940       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.535090       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.540969       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:04.541239       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0524 18:37:04.555353       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:20.560907       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0524 18:37:20.561333       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0524 18:37:20.662721       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 18:37:20.895710       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0524 18:37:20.999329       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 18:37:33.024397       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:33.041354       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0524 18:37:34.012720       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0524 18:37:34.026381       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [bf84d832ec96] <==
	* I0524 18:36:51.096070       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0524 18:36:51.096254       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0524 18:36:51.096305       1 server_others.go:551] "Using iptables proxy"
	I0524 18:36:51.129985       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 18:36:51.130045       1 server_others.go:190] "Using iptables Proxier"
	I0524 18:36:51.130091       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 18:36:51.130875       1 server.go:657] "Version info" version="v1.27.2"
	I0524 18:36:51.130883       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 18:36:51.134580       1 config.go:188] "Starting service config controller"
	I0524 18:36:51.134608       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 18:36:51.134627       1 config.go:97] "Starting endpoint slice config controller"
	I0524 18:36:51.134630       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 18:36:51.134949       1 config.go:315] "Starting node config controller"
	I0524 18:36:51.134952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 18:36:51.240491       1 shared_informer.go:318] Caches are synced for node config
	I0524 18:36:51.240513       1 shared_informer.go:318] Caches are synced for service config
	I0524 18:36:51.240529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [046435c695b1] <==
	* W0524 18:36:34.551296       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 18:36:34.551335       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 18:36:34.555158       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 18:36:34.555224       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 18:36:34.555257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:34.555277       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:34.555318       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:34.555338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:34.555364       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:34.555398       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:34.555416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:34.555434       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.414754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 18:36:35.414831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 18:36:35.419590       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 18:36:35.419621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 18:36:35.431658       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 18:36:35.431697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 18:36:35.542100       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 18:36:35.542130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 18:36:35.557940       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 18:36:35.558018       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 18:36:35.599004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 18:36:35.599089       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0524 18:36:36.142741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
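The forbidden list/watch errors above are the usual scheduler start-up race: its informers begin listing resources before the apiserver has finished bootstrapping RBAC, the reflectors retry, and the "Caches are synced" line at 18:36:36 shows the condition cleared itself within about two seconds of the first error.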
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 18:36:19 UTC, ends at Wed 2023-05-24 18:56:42 UTC. --
	May 24 18:51:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:51:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:51:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:51:37 addons-514000 kubelet[2266]: W0524 18:51:37.212802    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	May 24 18:52:37 addons-514000 kubelet[2266]: E0524 18:52:37.222293    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:52:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:52:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:52:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:53:37 addons-514000 kubelet[2266]: E0524 18:53:37.219445    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:53:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:53:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:53:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:54:37 addons-514000 kubelet[2266]: E0524 18:54:37.209773    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:54:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:54:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:54:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:55:37 addons-514000 kubelet[2266]: E0524 18:55:37.209128    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:55:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:55:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:55:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 18:56:37 addons-514000 kubelet[2266]: W0524 18:56:37.212870    2266 machine.go:65] Cannot read vendor id correctly, set empty.
	May 24 18:56:37 addons-514000 kubelet[2266]: E0524 18:56:37.214696    2266 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 18:56:37 addons-514000 kubelet[2266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 18:56:37 addons-514000 kubelet[2266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 18:56:37 addons-514000 kubelet[2266]:  > table=nat chain=KUBE-KUBELET-CANARY
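The once-a-minute iptables-canary error above is the kubelet probing for an IPv6 nat table that this guest kernel does not expose; it is noise, unrelated to the test failure. A hypothetical guest-side check, using the same ssh form the harness uses elsewhere in this report (illustrative commands, not from this run):

	$ out/minikube-darwin-arm64 ssh -p addons-514000 -- "lsmod | grep ip6table_nat || echo module not loaded"
	$ out/minikube-darwin-arm64 ssh -p addons-514000 -- "sudo modprobe ip6table_nat"   # only succeeds if the Buildroot kernel ships the module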
	
	* 
	* ==> storage-provisioner [c6d1bdca910b] <==
	* I0524 18:36:52.162540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 18:36:52.179095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 18:36:52.179236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 18:36:52.184538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 18:36:52.185437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	I0524 18:36:52.187871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d6698b6-9eb5-4aee-aab5-f9c270917482", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a became leader
	I0524 18:36:52.285999       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_06403223-0fe7-4b51-9128-30a4ab7aa15a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (832.26s)

                                                
                                    
TestAddons/serial (0s)

                                                
                                                
=== RUN   TestAddons/serial
addons_test.go:138: Unable to run more tests (deadline exceeded)
--- FAIL: TestAddons/serial (0.00s)

                                                
                                    
TestAddons/StoppedEnableDisable (0s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-514000
addons_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p addons-514000: context deadline exceeded (583ns)
addons_test.go:150: failed to stop minikube. args "out/minikube-darwin-arm64 stop -p addons-514000" : context deadline exceeded
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-514000
addons_test.go:152: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-514000: context deadline exceeded (84ns)
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-darwin-arm64 addons enable dashboard -p addons-514000" : context deadline exceeded
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-514000
addons_test.go:156: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-514000: context deadline exceeded (42ns)
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-darwin-arm64 addons disable dashboard -p addons-514000" : context deadline exceeded
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-514000
addons_test.go:161: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable gvisor -p addons-514000: context deadline exceeded (42ns)
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-darwin-arm64 addons disable gvisor -p addons-514000" : context deadline exceeded
--- FAIL: TestAddons/StoppedEnableDisable (0.00s)
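The nanosecond "durations" here (583ns, 84ns, 42ns) are the tell: the suite's shared test context had already exceeded its deadline during the 700-plus-second addon tests above, so each helper bails out with context deadline exceeded before it ever spawns the minikube binary. These failures, like TestAddons/serial's "Unable to run more tests", are fallout from the earlier hangs rather than independent regressions.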

                                                
                                    
TestCertOptions (10.13s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-715000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-715000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.852902167s)

                                                
                                                
-- stdout --
	* [cert-options-715000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-715000 in cluster cert-options-715000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-715000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-715000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-715000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-715000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-715000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (78.697541ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-715000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-715000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
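All four SAN assertions fail only because the VM never started and the certificate could not be read at all. For reference, a passing check would look roughly like this (a sketch; the output shape is assumed, not captured from this run):

	$ out/minikube-darwin-arm64 -p cert-options-715000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	        X509v3 Subject Alternative Name:
	            DNS:localhost, DNS:www.google.com, ..., IP Address:127.0.0.1, IP Address:192.168.15.15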
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-715000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-715000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-715000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.09825ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-715000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-715000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-715000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-05-24 12:29:31.021748 -0700 PDT m=+3227.891189501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-715000 -n cert-options-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-715000 -n cert-options-715000: exit status 7 (28.068917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-715000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-715000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-715000
--- FAIL: TestCertOptions (10.13s)
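TestCertOptions, like every qemu2-driver failure in this report, dies on the same root-cause line: Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal host-side triage sketch, assuming the stock socket_vmnet layout the logs reference under /opt/socket_vmnet (illustrative commands, not from the run):

	$ ls -l /var/run/socket_vmnet                                             # does the socket exist?
	$ sudo launchctl list | grep -i socket_vmnet                              # is the daemon loaded? (label may vary)
	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true    # can a client attach at all?

If the daemon is down, every qemu2 start on the agent fails identically, which matches the repeated pattern in the tests that follow.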

                                                
                                    
TestCertExpiration (195.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.812764209s)

                                                
                                                
-- stdout --
	* [cert-expiration-334000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-334000 in cluster cert-expiration-334000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-334000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0524 12:29:29.257504    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.167657208s)

                                                
                                                
-- stdout --
	* [cert-expiration-334000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-334000 in cluster cert-expiration-334000
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-334000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-334000 in cluster cert-expiration-334000
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-05-24 12:32:30.852805 -0700 PDT m=+3407.724033084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-334000 -n cert-expiration-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-334000 -n cert-expiration-334000: exit status 7 (29.780125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-334000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-334000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-334000
--- FAIL: TestCertExpiration (195.12s)

                                                
                                    
TestDockerFlags (10.03s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-760000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-760000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.779679292s)

                                                
                                                
-- stdout --
	* [docker-flags-760000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-760000 in cluster docker-flags-760000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-760000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:29:11.009945    3972 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:29:11.010091    3972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:29:11.010094    3972 out.go:309] Setting ErrFile to fd 2...
	I0524 12:29:11.010096    3972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:29:11.010168    3972 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:29:11.011315    3972 out.go:303] Setting JSON to false
	I0524 12:29:11.026554    3972 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3522,"bootTime":1684953029,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:29:11.026635    3972 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:29:11.031073    3972 out.go:177] * [docker-flags-760000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:29:11.038861    3972 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:29:11.038877    3972 notify.go:220] Checking for updates...
	I0524 12:29:11.045971    3972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:29:11.049009    3972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:29:11.051980    3972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:29:11.055007    3972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:29:11.056400    3972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:29:11.059278    3972 config.go:182] Loaded profile config "force-systemd-flag-032000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:29:11.059366    3972 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:29:11.059385    3972 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:29:11.063986    3972 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:29:11.068927    3972 start.go:295] selected driver: qemu2
	I0524 12:29:11.068933    3972 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:29:11.068938    3972 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:29:11.070775    3972 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:29:11.075017    3972 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:29:11.076426    3972 start_flags.go:910] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0524 12:29:11.076446    3972 cni.go:84] Creating CNI manager for ""
	I0524 12:29:11.076461    3972 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:29:11.076466    3972 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:29:11.076471    3972 start_flags.go:319] config:
	{Name:docker-flags-760000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-760000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:29:11.076547    3972 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:29:11.085002    3972 out.go:177] * Starting control plane node docker-flags-760000 in cluster docker-flags-760000
	I0524 12:29:11.088956    3972 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:29:11.088974    3972 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:29:11.088983    3972 cache.go:57] Caching tarball of preloaded images
	I0524 12:29:11.089029    3972 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:29:11.089034    3972 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:29:11.089078    3972 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/docker-flags-760000/config.json ...
	I0524 12:29:11.089089    3972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/docker-flags-760000/config.json: {Name:mkb4bdf93b6654a44a2793d6ca5998e4e5af6349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:29:11.089255    3972 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:29:11.089267    3972 start.go:364] acquiring machines lock for docker-flags-760000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:29:11.089295    3972 start.go:368] acquired machines lock for "docker-flags-760000" in 23.25µs
	I0524 12:29:11.089306    3972 start.go:93] Provisioning new machine with config: &{Name:docker-flags-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-760000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:29:11.089328    3972 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:29:11.095932    3972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:29:11.110881    3972 start.go:159] libmachine.API.Create for "docker-flags-760000" (driver="qemu2")
	I0524 12:29:11.110898    3972 client.go:168] LocalClient.Create starting
	I0524 12:29:11.110970    3972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:29:11.110991    3972 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:11.111001    3972 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:11.111029    3972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:29:11.111047    3972 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:11.111052    3972 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:11.111336    3972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:29:11.227671    3972 main.go:141] libmachine: Creating SSH key...
	I0524 12:29:11.267528    3972 main.go:141] libmachine: Creating Disk image...
	I0524 12:29:11.267539    3972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:29:11.267686    3972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2
	I0524 12:29:11.276252    3972 main.go:141] libmachine: STDOUT: 
	I0524 12:29:11.276264    3972 main.go:141] libmachine: STDERR: 
	I0524 12:29:11.276307    3972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2 +20000M
	I0524 12:29:11.283478    3972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:29:11.283489    3972 main.go:141] libmachine: STDERR: 
	I0524 12:29:11.283508    3972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2
	I0524 12:29:11.283514    3972 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:29:11.283560    3972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:5f:76:17:d8:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2
	I0524 12:29:11.285148    3972 main.go:141] libmachine: STDOUT: 
	I0524 12:29:11.285159    3972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:29:11.285181    3972 client.go:171] LocalClient.Create took 174.278875ms
	I0524 12:29:13.287324    3972 start.go:128] duration metric: createHost completed in 2.197996834s
	I0524 12:29:13.287381    3972 start.go:83] releasing machines lock for "docker-flags-760000", held for 2.198095208s
	W0524 12:29:13.287463    3972 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:13.304623    3972 out.go:177] * Deleting "docker-flags-760000" in qemu2 ...
	W0524 12:29:13.319766    3972 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:13.319806    3972 start.go:702] Will try again in 5 seconds ...
	I0524 12:29:18.322063    3972 start.go:364] acquiring machines lock for docker-flags-760000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:29:18.365782    3972 start.go:368] acquired machines lock for "docker-flags-760000" in 43.606625ms
	I0524 12:29:18.365906    3972 start.go:93] Provisioning new machine with config: &{Name:docker-flags-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-760000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:29:18.366176    3972 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:29:18.372045    3972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:29:18.420522    3972 start.go:159] libmachine.API.Create for "docker-flags-760000" (driver="qemu2")
	I0524 12:29:18.420569    3972 client.go:168] LocalClient.Create starting
	I0524 12:29:18.420730    3972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:29:18.420780    3972 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:18.420800    3972 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:18.420894    3972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:29:18.420937    3972 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:18.420950    3972 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:18.421569    3972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:29:18.546183    3972 main.go:141] libmachine: Creating SSH key...
	I0524 12:29:18.700429    3972 main.go:141] libmachine: Creating Disk image...
	I0524 12:29:18.700439    3972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:29:18.700611    3972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2
	I0524 12:29:18.709442    3972 main.go:141] libmachine: STDOUT: 
	I0524 12:29:18.709457    3972 main.go:141] libmachine: STDERR: 
	I0524 12:29:18.709509    3972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2 +20000M
	I0524 12:29:18.716664    3972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:29:18.716676    3972 main.go:141] libmachine: STDERR: 
	I0524 12:29:18.716697    3972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2
	I0524 12:29:18.716703    3972 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:29:18.716744    3972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e3:c5:7e:08:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/docker-flags-760000/disk.qcow2
	I0524 12:29:18.718254    3972 main.go:141] libmachine: STDOUT: 
	I0524 12:29:18.718269    3972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:29:18.718280    3972 client.go:171] LocalClient.Create took 297.708334ms
	I0524 12:29:20.720467    3972 start.go:128] duration metric: createHost completed in 2.354284584s
	I0524 12:29:20.720526    3972 start.go:83] releasing machines lock for "docker-flags-760000", held for 2.354725s
	W0524 12:29:20.721220    3972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:20.730897    3972 out.go:177] 
	W0524 12:29:20.737241    3972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:29:20.737277    3972 out.go:239] * 
	* 
	W0524 12:29:20.739703    3972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:29:20.748903    3972 out.go:177] 

** /stderr **
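Every failure in this test traces back to the same line in the stderr above: the VM is launched through socket_vmnet_client, and its dial of /var/run/socket_vmnet is refused because no socket_vmnet daemon is listening on that path. A minimal Go sketch to reproduce just that dial outside of minikube (the socket path is copied from the SocketVMnetPath field in the config dump; this is not part of the test suite):

	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Dial the unix socket that socket_vmnet_client hands to QEMU.
		// The path is the SocketVMnetPath from the log above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With no daemon listening, this fails the same way the VM
			// launch does, e.g. "connect: connection refused".
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}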
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-760000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:50: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-760000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-760000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (79.068167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-760000"

-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-760000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-760000\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-760000\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-760000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-760000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (44.459334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-760000"

-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-760000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:67: expected "out/minikube-darwin-arm64 -p docker-flags-760000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-760000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-05-24 12:29:20.888557 -0700 PDT m=+3217.757897459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-760000 -n docker-flags-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-760000 -n docker-flags-760000: exit status 7 (27.149291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-760000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-760000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-760000
--- FAIL: TestDockerFlags (10.03s)
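For context, the assertions that ran above against the never-started cluster reduce to two minikube ssh round-trips: each --docker-env value must appear in the docker unit's Environment property, and each --docker-opt value on its ExecStart line. A hedged Go sketch of that flow (binary path and profile name copied from the log; an illustration, not an excerpt of docker_test.go):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		const profile = "docker-flags-760000" // profile name from the run above
		// 1) --docker-env values should land in the docker unit's Environment.
		env, err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh",
			"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			// In the run above this is exit status 89: the node is not running.
			fmt.Printf("ssh failed: %v\n%s", err, env)
			return
		}
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(string(env), kv) {
				fmt.Println("missing docker env:", kv)
			}
		}
		// 2) --docker-opt values should show up on the daemon's ExecStart line.
		execStart, _ := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh",
			"sudo systemctl show docker --property=ExecStart --no-pager").CombinedOutput()
		if !strings.Contains(string(execStart), "--debug") {
			fmt.Println("missing docker opt: --debug")
		}
	}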

TestForceSystemdFlag (12.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-032000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-032000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.88817625s)

-- stdout --
	* [force-systemd-flag-032000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-032000 in cluster force-systemd-flag-032000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-032000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:29:03.797533    3951 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:29:03.797653    3951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:29:03.797656    3951 out.go:309] Setting ErrFile to fd 2...
	I0524 12:29:03.797659    3951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:29:03.797727    3951 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:29:03.798732    3951 out.go:303] Setting JSON to false
	I0524 12:29:03.813737    3951 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3514,"bootTime":1684953029,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:29:03.813809    3951 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:29:03.819219    3951 out.go:177] * [force-systemd-flag-032000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:29:03.827172    3951 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:29:03.827176    3951 notify.go:220] Checking for updates...
	I0524 12:29:03.833142    3951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:29:03.837218    3951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:29:03.840160    3951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:29:03.843188    3951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:29:03.846195    3951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:29:03.849297    3951 config.go:182] Loaded profile config "force-systemd-env-895000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:29:03.849363    3951 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:29:03.849381    3951 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:29:03.853091    3951 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:29:03.863128    3951 start.go:295] selected driver: qemu2
	I0524 12:29:03.863137    3951 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:29:03.863143    3951 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:29:03.865159    3951 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:29:03.868120    3951 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:29:03.871211    3951 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 12:29:03.871227    3951 cni.go:84] Creating CNI manager for ""
	I0524 12:29:03.871236    3951 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:29:03.871240    3951 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:29:03.871247    3951 start_flags.go:319] config:
	{Name:force-systemd-flag-032000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-032000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:29:03.871322    3951 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:29:03.880154    3951 out.go:177] * Starting control plane node force-systemd-flag-032000 in cluster force-systemd-flag-032000
	I0524 12:29:03.883916    3951 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:29:03.883941    3951 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:29:03.883961    3951 cache.go:57] Caching tarball of preloaded images
	I0524 12:29:03.884027    3951 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:29:03.884032    3951 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:29:03.884081    3951 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/force-systemd-flag-032000/config.json ...
	I0524 12:29:03.884094    3951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/force-systemd-flag-032000/config.json: {Name:mkc33d81267e0037c4e345ab81358c4b19d69131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:29:03.884295    3951 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:29:03.884310    3951 start.go:364] acquiring machines lock for force-systemd-flag-032000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:29:03.884340    3951 start.go:368] acquired machines lock for "force-systemd-flag-032000" in 24.708µs
	I0524 12:29:03.884354    3951 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-032000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-032000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:29:03.884384    3951 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:29:03.892998    3951 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:29:03.909675    3951 start.go:159] libmachine.API.Create for "force-systemd-flag-032000" (driver="qemu2")
	I0524 12:29:03.909702    3951 client.go:168] LocalClient.Create starting
	I0524 12:29:03.909776    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:29:03.909804    3951 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:03.909819    3951 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:03.909866    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:29:03.909882    3951 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:03.909896    3951 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:03.910248    3951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:29:04.023770    3951 main.go:141] libmachine: Creating SSH key...
	I0524 12:29:04.096194    3951 main.go:141] libmachine: Creating Disk image...
	I0524 12:29:04.096200    3951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:29:04.096356    3951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2
	I0524 12:29:04.105080    3951 main.go:141] libmachine: STDOUT: 
	I0524 12:29:04.105106    3951 main.go:141] libmachine: STDERR: 
	I0524 12:29:04.105160    3951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2 +20000M
	I0524 12:29:04.112199    3951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:29:04.112211    3951 main.go:141] libmachine: STDERR: 
	I0524 12:29:04.112227    3951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2
	I0524 12:29:04.112234    3951 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:29:04.112272    3951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:43:51:e8:ae:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2
	I0524 12:29:04.113794    3951 main.go:141] libmachine: STDOUT: 
	I0524 12:29:04.113811    3951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:29:04.113839    3951 client.go:171] LocalClient.Create took 204.131958ms
	I0524 12:29:06.115985    3951 start.go:128] duration metric: createHost completed in 2.231601542s
	I0524 12:29:06.116306    3951 start.go:83] releasing machines lock for "force-systemd-flag-032000", held for 2.231974s
	W0524 12:29:06.116368    3951 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:06.127696    3951 out.go:177] * Deleting "force-systemd-flag-032000" in qemu2 ...
	W0524 12:29:06.149148    3951 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:06.149179    3951 start.go:702] Will try again in 5 seconds ...
	I0524 12:29:11.151238    3951 start.go:364] acquiring machines lock for force-systemd-flag-032000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:29:13.287518    3951 start.go:368] acquired machines lock for "force-systemd-flag-032000" in 2.136260667s
	I0524 12:29:13.287686    3951 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-032000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-032000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:29:13.287979    3951 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:29:13.296629    3951 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:29:13.341665    3951 start.go:159] libmachine.API.Create for "force-systemd-flag-032000" (driver="qemu2")
	I0524 12:29:13.341713    3951 client.go:168] LocalClient.Create starting
	I0524 12:29:13.341871    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:29:13.341922    3951 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:13.341937    3951 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:13.342017    3951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:29:13.342046    3951 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:13.342059    3951 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:13.342580    3951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:29:13.467378    3951 main.go:141] libmachine: Creating SSH key...
	I0524 12:29:13.596847    3951 main.go:141] libmachine: Creating Disk image...
	I0524 12:29:13.596852    3951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:29:13.597024    3951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2
	I0524 12:29:13.606108    3951 main.go:141] libmachine: STDOUT: 
	I0524 12:29:13.606122    3951 main.go:141] libmachine: STDERR: 
	I0524 12:29:13.606174    3951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2 +20000M
	I0524 12:29:13.613375    3951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:29:13.613388    3951 main.go:141] libmachine: STDERR: 
	I0524 12:29:13.613403    3951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2
	I0524 12:29:13.613410    3951 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:29:13.613456    3951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:af:78:3d:65:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-flag-032000/disk.qcow2
	I0524 12:29:13.614997    3951 main.go:141] libmachine: STDOUT: 
	I0524 12:29:13.615009    3951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:29:13.615023    3951 client.go:171] LocalClient.Create took 273.306708ms
	I0524 12:29:15.617166    3951 start.go:128] duration metric: createHost completed in 2.329186167s
	I0524 12:29:15.617232    3951 start.go:83] releasing machines lock for "force-systemd-flag-032000", held for 2.329681125s
	W0524 12:29:15.617949    3951 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:15.629695    3951 out.go:177] 
	W0524 12:29:15.634605    3951 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:29:15.634630    3951 out.go:239] * 
	* 
	W0524 12:29:15.637245    3951 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:29:15.646512    3951 out.go:177] 

** /stderr **
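The create/retry shape in this stderr matches the TestDockerFlags run above: the first create fails, the half-built profile is deleted, one retry follows after 5 seconds, and the second failure is surfaced as GUEST_PROVISION (start.go:687 and start.go:702 in the log). A stub sketch of that control flow, where createHost is a hypothetical stand-in for the libmachine create path:

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// createHost is a hypothetical stand-in for the libmachine create path;
	// in this run it failed the same way on both attempts.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	
	func main() {
		var err error
		for attempt := 1; attempt <= 2; attempt++ {
			if err = createHost(); err == nil {
				return // host came up
			}
			if attempt == 1 {
				fmt.Println("StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			}
		}
		fmt.Println("Exiting due to GUEST_PROVISION:", err)
	}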
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-032000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-032000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-032000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.015417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-032000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-032000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2023-05-24 12:29:15.74158 -0700 PDT m=+3212.610869959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-032000 -n force-systemd-flag-032000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-032000 -n force-systemd-flag-032000: exit status 7 (32.096875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-032000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-032000
--- FAIL: TestForceSystemdFlag (12.10s)
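The check that never got to run here is a single docker info query: with --force-systemd, the guest's Docker should report the systemd cgroup driver. A minimal sketch of that assertion (binary path and profile name taken from the log; illustrative only, not an excerpt of docker_test.go):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// --force-systemd should leave the guest's Docker on the systemd cgroup driver.
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-032000",
			"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			// Exit status 89 in the run above: the control plane is not running.
			fmt.Printf("ssh failed: %v\n%s", err, out)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "systemd" {
			fmt.Printf("expected cgroup driver systemd, got %q\n", got)
		}
	}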

TestForceSystemdEnv (9.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-895000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
E0524 12:29:01.546946    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-895000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.745027917s)

-- stdout --
	* [force-systemd-env-895000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-895000 in cluster force-systemd-env-895000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-895000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:29:01.047255    3933 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:29:01.047362    3933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:29:01.047365    3933 out.go:309] Setting ErrFile to fd 2...
	I0524 12:29:01.047368    3933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:29:01.047434    3933 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:29:01.048455    3933 out.go:303] Setting JSON to false
	I0524 12:29:01.063480    3933 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3512,"bootTime":1684953029,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:29:01.063541    3933 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:29:01.068006    3933 out.go:177] * [force-systemd-env-895000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:29:01.075172    3933 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:29:01.075198    3933 notify.go:220] Checking for updates...
	I0524 12:29:01.078122    3933 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:29:01.086179    3933 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:29:01.090122    3933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:29:01.093149    3933 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:29:01.097238    3933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0524 12:29:01.101416    3933 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:29:01.101439    3933 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:29:01.105161    3933 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:29:01.112181    3933 start.go:295] selected driver: qemu2
	I0524 12:29:01.112187    3933 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:29:01.112195    3933 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:29:01.114211    3933 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:29:01.118159    3933 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:29:01.121284    3933 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 12:29:01.121296    3933 cni.go:84] Creating CNI manager for ""
	I0524 12:29:01.121305    3933 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:29:01.121312    3933 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:29:01.121318    3933 start_flags.go:319] config:
	{Name:force-systemd-env-895000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-895000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:29:01.121384    3933 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:29:01.130198    3933 out.go:177] * Starting control plane node force-systemd-env-895000 in cluster force-systemd-env-895000
	I0524 12:29:01.134151    3933 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:29:01.134175    3933 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:29:01.134188    3933 cache.go:57] Caching tarball of preloaded images
	I0524 12:29:01.134249    3933 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:29:01.134255    3933 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:29:01.134311    3933 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/force-systemd-env-895000/config.json ...
	I0524 12:29:01.134324    3933 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/force-systemd-env-895000/config.json: {Name:mkab5dd49ae55515a6be53e648d1712175f79983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:29:01.134554    3933 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:29:01.134569    3933 start.go:364] acquiring machines lock for force-systemd-env-895000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:29:01.134599    3933 start.go:368] acquired machines lock for "force-systemd-env-895000" in 24.834µs
	I0524 12:29:01.134614    3933 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-895000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:29:01.134639    3933 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:29:01.142190    3933 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:29:01.158756    3933 start.go:159] libmachine.API.Create for "force-systemd-env-895000" (driver="qemu2")
	I0524 12:29:01.158777    3933 client.go:168] LocalClient.Create starting
	I0524 12:29:01.158862    3933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:29:01.158882    3933 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:01.158891    3933 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:01.158918    3933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:29:01.158933    3933 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:01.158940    3933 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:01.159241    3933 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:29:01.272522    3933 main.go:141] libmachine: Creating SSH key...
	I0524 12:29:01.379448    3933 main.go:141] libmachine: Creating Disk image...
	I0524 12:29:01.379455    3933 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:29:01.379611    3933 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2
	I0524 12:29:01.388287    3933 main.go:141] libmachine: STDOUT: 
	I0524 12:29:01.388300    3933 main.go:141] libmachine: STDERR: 
	I0524 12:29:01.388353    3933 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2 +20000M
	I0524 12:29:01.395668    3933 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:29:01.395696    3933 main.go:141] libmachine: STDERR: 
	I0524 12:29:01.395720    3933 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2
	I0524 12:29:01.395726    3933 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:29:01.395758    3933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:aa:3d:a4:18:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2
	I0524 12:29:01.397343    3933 main.go:141] libmachine: STDOUT: 
	I0524 12:29:01.397358    3933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:29:01.397377    3933 client.go:171] LocalClient.Create took 238.59725ms
	I0524 12:29:03.399523    3933 start.go:128] duration metric: createHost completed in 2.264886042s
	I0524 12:29:03.399589    3933 start.go:83] releasing machines lock for "force-systemd-env-895000", held for 2.265003s
	W0524 12:29:03.399678    3933 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:03.406277    3933 out.go:177] * Deleting "force-systemd-env-895000" in qemu2 ...
	W0524 12:29:03.428902    3933 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:03.428935    3933 start.go:702] Will try again in 5 seconds ...
	I0524 12:29:08.431130    3933 start.go:364] acquiring machines lock for force-systemd-env-895000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:29:08.431638    3933 start.go:368] acquired machines lock for "force-systemd-env-895000" in 411.083µs
	I0524 12:29:08.431774    3933 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-895000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-895000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:29:08.432087    3933 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:29:08.440885    3933 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 12:29:08.486848    3933 start.go:159] libmachine.API.Create for "force-systemd-env-895000" (driver="qemu2")
	I0524 12:29:08.486881    3933 client.go:168] LocalClient.Create starting
	I0524 12:29:08.487009    3933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:29:08.487055    3933 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:08.487078    3933 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:08.487167    3933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:29:08.487199    3933 main.go:141] libmachine: Decoding PEM data...
	I0524 12:29:08.487215    3933 main.go:141] libmachine: Parsing certificate...
	I0524 12:29:08.487726    3933 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:29:08.615633    3933 main.go:141] libmachine: Creating SSH key...
	I0524 12:29:08.705628    3933 main.go:141] libmachine: Creating Disk image...
	I0524 12:29:08.705634    3933 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:29:08.705800    3933 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2
	I0524 12:29:08.714525    3933 main.go:141] libmachine: STDOUT: 
	I0524 12:29:08.714540    3933 main.go:141] libmachine: STDERR: 
	I0524 12:29:08.714607    3933 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2 +20000M
	I0524 12:29:08.721708    3933 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:29:08.721723    3933 main.go:141] libmachine: STDERR: 
	I0524 12:29:08.721737    3933 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2
	I0524 12:29:08.721744    3933 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:29:08.721783    3933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:24:53:b2:b3:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/force-systemd-env-895000/disk.qcow2
	I0524 12:29:08.723313    3933 main.go:141] libmachine: STDOUT: 
	I0524 12:29:08.723327    3933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:29:08.723339    3933 client.go:171] LocalClient.Create took 236.456416ms
	I0524 12:29:10.725471    3933 start.go:128] duration metric: createHost completed in 2.293375416s
	I0524 12:29:10.725544    3933 start.go:83] releasing machines lock for "force-systemd-env-895000", held for 2.293899291s
	W0524 12:29:10.726162    3933 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:29:10.735736    3933 out.go:177] 
	W0524 12:29:10.740859    3933 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:29:10.740885    3933 out.go:239] * 
	* 
	W0524 12:29:10.743597    3933 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:29:10.752700    3933 out.go:177] 

** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-895000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
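Both createHost attempts in the stderr dump above die on the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon behind the SocketVMnetPath from the profile config was not accepting connections when socket_vmnet_client tried to hand QEMU its network fd. A minimal, hypothetical pre-flight check (not minikube code; the socket path is the one shown in the config above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client passes to QEMU as fd 3.
		// An error here reproduces the exact failure mode seen in the log.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}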
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-895000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-895000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.680291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-895000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-895000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
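For reference, the probe that aborted with exit status 89 is simply `minikube ssh` wrapping `docker info`; it only yields an answer once the VM is running. A standalone sketch of the same probe (binary path and profile name copied from the log lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the test runs; the expected output is "systemd"
		// when MINIKUBE_FORCE_SYSTEMD=true took effect inside the guest.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-env-895000",
			"ssh", "docker info --format {{.CgroupDriver}}")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("probe failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("cgroup driver: %s", out)
	}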
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2023-05-24 12:29:10.844915 -0700 PDT m=+3207.714156084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-895000 -n force-systemd-env-895000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-895000 -n force-systemd-env-895000: exit status 7 (32.108958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-895000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-895000
--- FAIL: TestForceSystemdEnv (9.96s)

TestFunctional/parallel/ServiceCmdConnect (32.87s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-097000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-097000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-kzshd" [aad40fa8-bb2f-4177-a29e-7b3ddd5d18bb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-kzshd" [aad40fa8-bb2f-4177-a29e-7b3ddd5d18bb] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008334417s
functional_test.go:1647: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.105.4:31524
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
functional_test.go:1679: failed to fetch http://192.168.105.4:31524: Get "http://192.168.105.4:31524": dial tcp 192.168.105.4:31524: connect: connection refused
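The seven identical fetch failures above come from the test's retry loop: every GET against the NodePort was refused because the service had no ready endpoints (see the describe output below). A hedged sketch of that fetch-with-retry pattern, not the test's actual implementation (the URL and attempt count are taken from the log; the interval is an assumption):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// fetchWithRetry polls url until a GET succeeds or attempts run out.
	func fetchWithRetry(url string, attempts int, delay time.Duration) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			resp, err := http.Get(url)
			if err == nil {
				body, readErr := io.ReadAll(resp.Body)
				resp.Body.Close()
				return body, readErr
			}
			lastErr = err // e.g. "connect: connection refused", as in the log
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		body, err := fetchWithRetry("http://192.168.105.4:31524", 7, 3*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%s\n", body)
	}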
functional_test.go:1596: service test failed - dumping debug information
functional_test.go:1597: -----------------------service failure post-mortem--------------------------------
functional_test.go:1600: (dbg) Run:  kubectl --context functional-097000 describe po hello-node-connect
functional_test.go:1604: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-kzshd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-097000/192.168.105.4
Start Time:       Wed, 24 May 2023 12:19:23 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://63b3f8cba615c9f2c085a69e83657b52aa347d4c364341373904c3b5ca7ec818
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 24 May 2023 12:19:41 -0700
      Finished:     Wed, 24 May 2023 12:19:41 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 24 May 2023 12:19:25 -0700
      Finished:     Wed, 24 May 2023 12:19:25 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rzckd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-rzckd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-kzshd to functional-097000
Normal   Pulled     14s (x3 over 31s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    14s (x3 over 31s)  kubelet            Created container echoserver-arm
Normal   Started    14s (x3 over 31s)  kubelet            Started container echoserver-arm
Warning  BackOff    13s (x2 over 29s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-kzshd_default(aad40fa8-bb2f-4177-a29e-7b3ddd5d18bb)

functional_test.go:1606: (dbg) Run:  kubectl --context functional-097000 logs -l app=hello-node-connect
functional_test.go:1610: hello-node logs:
exec /usr/sbin/nginx: exec format error
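The `exec format error` in the pod log means the kernel could not execute the container's entrypoint binary, which on this arm64 host almost always indicates an image built for a different CPU architecture. A quick hypothetical check (assuming docker is on PATH; the image name is from the describe output above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Report the architecture recorded in the image config; compare it
		// with the node's arch (arm64 here) to confirm the mismatch.
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Architecture}}",
			"registry.k8s.io/echoserver-arm:1.8").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("image architecture: %s", out)
	}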
functional_test.go:1612: (dbg) Run:  kubectl --context functional-097000 describe svc hello-node-connect
functional_test.go:1616: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.241.198
IPs:                      10.108.241.198
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31524/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
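Note the empty `Endpoints:` field above: the service selector matches the pod, but the pod never becomes Ready, so kube-proxy has nothing to forward the NodePort to and every connection is refused. A hypothetical endpoint check, using the kubectl context from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Empty output means no ready pod addresses back the service.
		out, err := exec.Command("kubectl", "--context", "functional-097000",
			"get", "endpoints", "hello-node-connect",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
		if err != nil {
			fmt.Printf("lookup failed: %v\n%s", err, out)
			return
		}
		if len(out) == 0 {
			fmt.Println("service has no ready endpoints")
			return
		}
		fmt.Printf("ready endpoint IPs: %s\n", out)
	}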
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-097000 -n functional-097000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh -- ls                                                                                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh sudo                                                                                         | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| mount   | -p functional-097000                                                                                               | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount1 |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount   | -p functional-097000                                                                                               | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount2 |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount   | -p functional-097000                                                                                               | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount3 |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|         | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-097000 ssh findmnt                                                                                      | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|         | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 12:18:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 12:18:18.558366    2623 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:18:18.558492    2623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:18:18.558494    2623 out.go:309] Setting ErrFile to fd 2...
	I0524 12:18:18.558496    2623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:18:18.558560    2623 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:18:18.559597    2623 out.go:303] Setting JSON to false
	I0524 12:18:18.575123    2623 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2869,"bootTime":1684953029,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:18:18.575177    2623 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:18:18.580786    2623 out.go:177] * [functional-097000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:18:18.588730    2623 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:18:18.588756    2623 notify.go:220] Checking for updates...
	I0524 12:18:18.594659    2623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:18:18.597680    2623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:18:18.598968    2623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:18:18.601639    2623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:18:18.604678    2623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:18:18.607875    2623 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:18:18.607896    2623 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:18:18.611667    2623 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:18:18.619532    2623 start.go:295] selected driver: qemu2
	I0524 12:18:18.619538    2623 start.go:870] validating driver "qemu2" against &{Name:functional-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-097000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:18:18.619584    2623 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:18:18.621654    2623 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:18:18.621674    2623 cni.go:84] Creating CNI manager for ""
	I0524 12:18:18.621680    2623 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:18:18.621685    2623 start_flags.go:319] config:
	{Name:functional-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-097000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:18:18.621750    2623 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:18:18.625623    2623 out.go:177] * Starting control plane node functional-097000 in cluster functional-097000
	I0524 12:18:18.633657    2623 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:18:18.633675    2623 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:18:18.633682    2623 cache.go:57] Caching tarball of preloaded images
	I0524 12:18:18.633734    2623 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:18:18.633738    2623 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:18:18.633791    2623 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/config.json ...
	I0524 12:18:18.634074    2623 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:18:18.634084    2623 start.go:364] acquiring machines lock for functional-097000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:18:18.634110    2623 start.go:368] acquired machines lock for "functional-097000" in 22.292µs
	I0524 12:18:18.634117    2623 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:18:18.634119    2623 fix.go:55] fixHost starting: 
	I0524 12:18:18.634679    2623 fix.go:103] recreateIfNeeded on functional-097000: state=Running err=<nil>
	W0524 12:18:18.634683    2623 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:18:18.643699    2623 out.go:177] * Updating the running qemu2 "functional-097000" VM ...
	I0524 12:18:18.647673    2623 machine.go:88] provisioning docker machine ...
	I0524 12:18:18.647691    2623 buildroot.go:166] provisioning hostname "functional-097000"
	I0524 12:18:18.647753    2623 main.go:141] libmachine: Using SSH client type: native
	I0524 12:18:18.648027    2623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10520c6d0] 0x10520f130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0524 12:18:18.648031    2623 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-097000 && echo "functional-097000" | sudo tee /etc/hostname
	I0524 12:18:18.723587    2623 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-097000
	
	I0524 12:18:18.723636    2623 main.go:141] libmachine: Using SSH client type: native
	I0524 12:18:18.723882    2623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10520c6d0] 0x10520f130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0524 12:18:18.723889    2623 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-097000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-097000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-097000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 12:18:18.791380    2623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 12:18:18.791389    2623 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 12:18:18.791399    2623 buildroot.go:174] setting up certificates
	I0524 12:18:18.791406    2623 provision.go:83] configureAuth start
	I0524 12:18:18.791409    2623 provision.go:138] copyHostCerts
	I0524 12:18:18.791482    2623 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem, removing ...
	I0524 12:18:18.791486    2623 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem
	I0524 12:18:18.791597    2623 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 12:18:18.791769    2623 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem, removing ...
	I0524 12:18:18.791770    2623 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem
	I0524 12:18:18.791808    2623 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 12:18:18.791899    2623 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem, removing ...
	I0524 12:18:18.791901    2623 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem
	I0524 12:18:18.791937    2623 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 12:18:18.792005    2623 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.functional-097000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-097000]
	I0524 12:18:18.853764    2623 provision.go:172] copyRemoteCerts
	I0524 12:18:18.853810    2623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 12:18:18.853817    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
	I0524 12:18:18.891733    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 12:18:18.898385    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0524 12:18:18.905399    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 12:18:18.913063    2623 provision.go:86] duration metric: configureAuth took 121.651375ms
	I0524 12:18:18.913068    2623 buildroot.go:189] setting minikube options for container-runtime
	I0524 12:18:18.913189    2623 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:18:18.913219    2623 main.go:141] libmachine: Using SSH client type: native
	I0524 12:18:18.913444    2623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10520c6d0] 0x10520f130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0524 12:18:18.913448    2623 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 12:18:18.981929    2623 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 12:18:18.981936    2623 buildroot.go:70] root file system type: tmpfs
	I0524 12:18:18.981992    2623 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 12:18:18.982054    2623 main.go:141] libmachine: Using SSH client type: native
	I0524 12:18:18.982294    2623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10520c6d0] 0x10520f130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0524 12:18:18.982326    2623 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 12:18:19.054175    2623 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 12:18:19.054218    2623 main.go:141] libmachine: Using SSH client type: native
	I0524 12:18:19.054452    2623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10520c6d0] 0x10520f130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0524 12:18:19.054458    2623 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 12:18:19.124426    2623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 12:18:19.124431    2623 machine.go:91] provisioned docker machine in 476.758834ms
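
The command at 12:18:19.054458 is an idempotency guard: docker.service.new only replaces the live unit, and docker is only reloaded, re-enabled, and restarted, when `diff -u` reports a difference. A sketch of the same write-if-changed pattern in plain Go (paths match the log; actually running it would require root):

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    // syncUnit writes content to path and restarts the unit, but only if the
    // on-disk file differs -- mirroring `diff -u old new || { mv; restart; }`.
    func syncUnit(path string, content []byte, unit string) error {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return nil // unchanged: nothing to do
    	}
    	if err := os.WriteFile(path, content, 0644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", unit},
    		{"restart", unit},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
    	if err := syncUnit("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
    		log.Fatal(err)
    	}
    }
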
	I0524 12:18:19.124435    2623 start.go:300] post-start starting for "functional-097000" (driver="qemu2")
	I0524 12:18:19.124438    2623 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 12:18:19.124479    2623 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 12:18:19.124486    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
	I0524 12:18:19.161209    2623 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 12:18:19.162727    2623 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 12:18:19.162733    2623 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 12:18:19.162800    2623 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 12:18:19.162898    2623 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem -> 14542.pem in /etc/ssl/certs
	I0524 12:18:19.162997    2623 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/test/nested/copy/1454/hosts -> hosts in /etc/test/nested/copy/1454
	I0524 12:18:19.163025    2623 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1454
	I0524 12:18:19.165583    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem --> /etc/ssl/certs/14542.pem (1708 bytes)
	I0524 12:18:19.173232    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/test/nested/copy/1454/hosts --> /etc/test/nested/copy/1454/hosts (40 bytes)
	I0524 12:18:19.180435    2623 start.go:303] post-start completed in 55.996083ms
	I0524 12:18:19.180440    2623 fix.go:57] fixHost completed within 546.325208ms
	I0524 12:18:19.180490    2623 main.go:141] libmachine: Using SSH client type: native
	I0524 12:18:19.180730    2623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10520c6d0] 0x10520f130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0524 12:18:19.180733    2623 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 12:18:19.250522    2623 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684955899.246469251
	
	I0524 12:18:19.250527    2623 fix.go:207] guest clock: 1684955899.246469251
	I0524 12:18:19.250530    2623 fix.go:220] Guest: 2023-05-24 12:18:19.246469251 -0700 PDT Remote: 2023-05-24 12:18:19.180441 -0700 PDT m=+0.641186876 (delta=66.028251ms)
	I0524 12:18:19.250539    2623 fix.go:191] guest clock delta is within tolerance: 66.028251ms
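
The guest clock check parses the `date +%s.%N` output (seconds.nanoseconds since the epoch) and only resynchronizes when the host/guest delta exceeds a tolerance; here the 66ms delta passes. A sketch of the parse-and-compare using the captured value; the one-second threshold is an assumption for illustration, not minikube's configured tolerance.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(s string) (time.Time, error) {
    	// "1684955899.246469251" -> seconds and nanoseconds
    	sec, nsec, _ := strings.Cut(strings.TrimSpace(s), ".")
    	secs, err := strconv.ParseInt(sec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsecs, err := strconv.ParseInt(nsec, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(secs, nsecs), nil
    }

    func main() {
    	guest, err := parseGuestClock("1684955899.246469251")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta // absolute skew in either direction
    	}
    	const tolerance = time.Second // hypothetical threshold for this sketch
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
    }
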
	I0524 12:18:19.250542    2623 start.go:83] releasing machines lock for "functional-097000", held for 616.4345ms
	I0524 12:18:19.250827    2623 ssh_runner.go:195] Run: cat /version.json
	I0524 12:18:19.250832    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
	I0524 12:18:19.250853    2623 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 12:18:19.250872    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
	I0524 12:18:19.328940    2623 ssh_runner.go:195] Run: systemctl --version
	I0524 12:18:19.330852    2623 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 12:18:19.332385    2623 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 12:18:19.332409    2623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 12:18:19.335651    2623 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0524 12:18:19.335659    2623 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:18:19.335724    2623 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:18:19.345766    2623 docker.go:633] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-097000
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0524 12:18:19.345774    2623 docker.go:563] Images already preloaded, skipping extraction
	I0524 12:18:19.345778    2623 start.go:481] detecting cgroup driver to use...
	I0524 12:18:19.345826    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 12:18:19.351635    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 12:18:19.354604    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 12:18:19.357508    2623 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 12:18:19.357526    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 12:18:19.360908    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 12:18:19.364634    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 12:18:19.367504    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 12:18:19.370342    2623 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 12:18:19.373211    2623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 12:18:19.376270    2623 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 12:18:19.378960    2623 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 12:18:19.381623    2623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:18:19.476369    2623 ssh_runner.go:195] Run: sudo systemctl restart containerd
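
The sed series above edits /etc/containerd/config.toml in place: forcing SystemdCgroup = false (the cgroupfs driver), migrating runtime names to io.containerd.runc.v2, and pointing conf_dir at /etc/cni/net.d. The same kind of anchored line rewrite in Go, shown for the SystemdCgroup key only and against an in-memory copy of the file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	config := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")

    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
    	fmt.Print(string(out))
    }
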
	I0524 12:18:19.486865    2623 start.go:481] detecting cgroup driver to use...
	I0524 12:18:19.486928    2623 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 12:18:19.492494    2623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 12:18:19.497376    2623 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 12:18:19.504021    2623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 12:18:19.510539    2623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 12:18:19.515040    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 12:18:19.520559    2623 ssh_runner.go:195] Run: which cri-dockerd
	I0524 12:18:19.522083    2623 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 12:18:19.524861    2623 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 12:18:19.529882    2623 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 12:18:19.620427    2623 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 12:18:19.712562    2623 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 12:18:19.712573    2623 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 12:18:19.719073    2623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:18:19.805413    2623 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 12:18:36.312961    2623 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.507668291s)
	I0524 12:18:36.313034    2623 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 12:18:36.422679    2623 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 12:18:36.543889    2623 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 12:18:36.719740    2623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:18:36.873303    2623 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 12:18:36.900265    2623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:18:37.023504    2623 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 12:18:37.084484    2623 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 12:18:37.084577    2623 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 12:18:37.088525    2623 start.go:549] Will wait 60s for crictl version
	I0524 12:18:37.088559    2623 ssh_runner.go:195] Run: which crictl
	I0524 12:18:37.090957    2623 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 12:18:37.106144    2623 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 12:18:37.106218    2623 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 12:18:37.117322    2623 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 12:18:37.140816    2623 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 12:18:37.140908    2623 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 12:18:37.146307    2623 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0524 12:18:37.150797    2623 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:18:37.150831    2623 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:18:37.160104    2623 docker.go:633] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-097000
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0524 12:18:37.160110    2623 docker.go:563] Images already preloaded, skipping extraction
	I0524 12:18:37.160167    2623 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:18:37.167490    2623 docker.go:633] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-097000
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0524 12:18:37.167495    2623 cache_images.go:84] Images are preloaded, skipping loading
	I0524 12:18:37.167538    2623 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 12:18:37.184579    2623 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0524 12:18:37.184595    2623 cni.go:84] Creating CNI manager for ""
	I0524 12:18:37.184599    2623 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:18:37.184610    2623 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 12:18:37.184620    2623 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-097000 NodeName:functional-097000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 12:18:37.184687    2623 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-097000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 12:18:37.184721    2623 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-097000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:functional-097000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0524 12:18:37.184779    2623 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 12:18:37.188067    2623 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 12:18:37.188094    2623 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 12:18:37.190799    2623 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0524 12:18:37.195395    2623 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 12:18:37.199960    2623 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0524 12:18:37.204589    2623 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0524 12:18:37.206034    2623 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000 for IP: 192.168.105.4
	I0524 12:18:37.206040    2623 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:18:37.206170    2623 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 12:18:37.206206    2623 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 12:18:37.206260    2623 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.key
	I0524 12:18:37.206298    2623 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/apiserver.key.942c473b
	I0524 12:18:37.206336    2623 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/proxy-client.key
	I0524 12:18:37.206489    2623 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem (1338 bytes)
	W0524 12:18:37.206511    2623 certs.go:433] ignoring /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454_empty.pem, impossibly tiny 0 bytes
	I0524 12:18:37.206517    2623 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 12:18:37.206537    2623 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 12:18:37.206554    2623 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 12:18:37.206570    2623 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 12:18:37.206614    2623 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem (1708 bytes)
	I0524 12:18:37.206933    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 12:18:37.215139    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 12:18:37.224282    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 12:18:37.231773    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 12:18:37.239081    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 12:18:37.247895    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 12:18:37.261466    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 12:18:37.272858    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 12:18:37.283471    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 12:18:37.294553    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem --> /usr/share/ca-certificates/1454.pem (1338 bytes)
	I0524 12:18:37.303142    2623 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1708 bytes)
	I0524 12:18:37.316179    2623 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 12:18:37.321142    2623 ssh_runner.go:195] Run: openssl version
	I0524 12:18:37.323114    2623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1454.pem && ln -fs /usr/share/ca-certificates/1454.pem /etc/ssl/certs/1454.pem"
	I0524 12:18:37.326540    2623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1454.pem
	I0524 12:18:37.328122    2623 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 19:16 /usr/share/ca-certificates/1454.pem
	I0524 12:18:37.328137    2623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1454.pem
	I0524 12:18:37.329997    2623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1454.pem /etc/ssl/certs/51391683.0"
	I0524 12:18:37.332815    2623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I0524 12:18:37.335791    2623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I0524 12:18:37.337306    2623 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 19:16 /usr/share/ca-certificates/14542.pem
	I0524 12:18:37.337320    2623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I0524 12:18:37.339226    2623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 12:18:37.341935    2623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 12:18:37.345533    2623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:18:37.347004    2623 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:18:37.347019    2623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:18:37.348947    2623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 12:18:37.351675    2623 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 12:18:37.353051    2623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 12:18:37.354828    2623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 12:18:37.356710    2623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 12:18:37.358536    2623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 12:18:37.360393    2623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 12:18:37.362156    2623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0524 12:18:37.364029    2623 kubeadm.go:404] StartCluster: {Name:functional-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-097000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:18:37.364098    2623 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 12:18:37.378639    2623 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 12:18:37.381688    2623 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 12:18:37.381695    2623 kubeadm.go:636] restartCluster start
	I0524 12:18:37.381723    2623 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 12:18:37.385005    2623 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 12:18:37.386324    2623 kubeconfig.go:92] found "functional-097000" server: "https://192.168.105.4:8441"
	I0524 12:18:37.387084    2623 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 12:18:37.390312    2623 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
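
The needs-reconfigure decision rests entirely on diff's exit status: 0 means the rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 means they differ (here, only the enable-admission-plugins value changed). A sketch of making the same distinction from Go, where exit status 1 is a signal rather than an error:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configsDiffer runs `diff -u old new` and maps the exit status:
    // 0 -> identical, 1 -> differ, anything else -> a real error.
    func configsDiffer(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if differ {
    		fmt.Println("needs reconfigure:\n" + diff)
    	}
    }
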
	I0524 12:18:37.390314    2623 kubeadm.go:1123] stopping kube-system containers ...
	I0524 12:18:37.390352    2623 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 12:18:37.402191    2623 docker.go:459] Stopping containers: [fec0eea3441e a43b76e168a9 0ea169d98690 96bfb181fd78 eb683a43e2ec f51909320c9b 2f9fd4cfe997 bbee5f23e28c 3733b6e1ac40 5b1c0c226d01 665e8e86a68e 784a2f80147c cfd937092164 42d7a9e406c2 d3f10436689c d57dbbe7af73 3d5ebbe24617 ac7a4ab9251b 98a855340eef 576eaa54beed c287048ea33e 54db4c648b28 09eb1f946003 28ae56c2489d 35ec86f92234 418d7c61bfa1 f155fd8d6923 1d4d4e3d25f8 d7ca6f53153a 8d60ac281a52 3dc0b838482b 0bbafb8c2104 96da0d2f259f 2c8a9795a932 c8911d2e3e34 98f1ffbaca9b d186526971d6 69be61a720a1]
	I0524 12:18:37.402257    2623 ssh_runner.go:195] Run: docker stop fec0eea3441e a43b76e168a9 0ea169d98690 96bfb181fd78 eb683a43e2ec f51909320c9b 2f9fd4cfe997 bbee5f23e28c 3733b6e1ac40 5b1c0c226d01 665e8e86a68e 784a2f80147c cfd937092164 42d7a9e406c2 d3f10436689c d57dbbe7af73 3d5ebbe24617 ac7a4ab9251b 98a855340eef 576eaa54beed c287048ea33e 54db4c648b28 09eb1f946003 28ae56c2489d 35ec86f92234 418d7c61bfa1 f155fd8d6923 1d4d4e3d25f8 d7ca6f53153a 8d60ac281a52 3dc0b838482b 0bbafb8c2104 96da0d2f259f 2c8a9795a932 c8911d2e3e34 98f1ffbaca9b d186526971d6 69be61a720a1
	I0524 12:18:37.568710    2623 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 12:18:37.649252    2623 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 12:18:37.652556    2623 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 24 19:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 24 19:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May 24 19:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May 24 19:17 /etc/kubernetes/scheduler.conf
	
	I0524 12:18:37.652581    2623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0524 12:18:37.655745    2623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0524 12:18:37.658346    2623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0524 12:18:37.661788    2623 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 12:18:37.661814    2623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0524 12:18:37.665491    2623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0524 12:18:37.669792    2623 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 12:18:37.669828    2623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0524 12:18:37.676704    2623 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 12:18:37.682122    2623 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 12:18:37.682127    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 12:18:37.704198    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 12:18:38.294672    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 12:18:38.415871    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 12:18:38.441792    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0524 12:18:38.472507    2623 api_server.go:52] waiting for apiserver process to appear ...
	I0524 12:18:38.472583    2623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 12:18:38.981063    2623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 12:18:39.482738    2623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 12:18:39.487132    2623 api_server.go:72] duration metric: took 1.014637s to wait for apiserver process to appear ...
	I0524 12:18:39.487141    2623 api_server.go:88] waiting for apiserver healthz status ...
	I0524 12:18:39.487147    2623 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0524 12:18:42.196905    2623 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0524 12:18:42.196913    2623 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0524 12:18:42.698986    2623 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0524 12:18:42.702768    2623 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 12:18:42.702774    2623 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 12:18:43.198982    2623 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0524 12:18:43.202290    2623 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 12:18:43.202297    2623 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 12:18:43.699010    2623 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0524 12:18:43.708047    2623 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0524 12:18:43.723914    2623 api_server.go:141] control plane version: v1.27.2
	I0524 12:18:43.723929    2623 api_server.go:131] duration metric: took 4.23681875s to wait for apiserver health ...
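
The healthz wait treats the early 403 (the anonymous request arrives before RBAC bootstrap completes) and the 500s (the rbac/bootstrap-roles poststarthook still pending) as retryable, re-polling roughly every 500ms until it sees a 200. A minimal loop to the same effect; skipping TLS verification is an assumption made only because this sketch does not load the cluster CA, which the real code does.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The real code trusts the minikube CA; skipping verification
    			// keeps the sketch self-contained.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reports healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("apiserver not healthy after %v", deadline)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.105.4:8441/healthz", time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("healthz ok")
    }
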
	I0524 12:18:43.723936    2623 cni.go:84] Creating CNI manager for ""
	I0524 12:18:43.723948    2623 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:18:43.728088    2623 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 12:18:43.734285    2623 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 12:18:43.745061    2623 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 12:18:43.753513    2623 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 12:18:43.759018    2623 system_pods.go:59] 7 kube-system pods found
	I0524 12:18:43.759026    2623 system_pods.go:61] "coredns-5d78c9869d-ttkkf" [79970485-ccea-485a-8887-14f60f779e72] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0524 12:18:43.759031    2623 system_pods.go:61] "etcd-functional-097000" [2ed7ee61-a3b1-420e-8729-7240f2e18f79] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0524 12:18:43.759034    2623 system_pods.go:61] "kube-apiserver-functional-097000" [d2c9531b-21fe-409b-a454-4d429d64a220] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0524 12:18:43.759037    2623 system_pods.go:61] "kube-controller-manager-functional-097000" [dab681ae-d452-40aa-a01c-5edd6922c52e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0524 12:18:43.759046    2623 system_pods.go:61] "kube-proxy-9r29n" [db856818-dc6b-48c6-9365-08778b30373e] Running
	I0524 12:18:43.759049    2623 system_pods.go:61] "kube-scheduler-functional-097000" [0a38f024-cf4d-4037-a954-df36e2347860] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0524 12:18:43.759051    2623 system_pods.go:61] "storage-provisioner" [ceca1710-28c8-4200-b6aa-9c0cfeb1efc9] Running
	I0524 12:18:43.759053    2623 system_pods.go:74] duration metric: took 5.535375ms to wait for pod list to return data ...
	I0524 12:18:43.759056    2623 node_conditions.go:102] verifying NodePressure condition ...
	I0524 12:18:43.760793    2623 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 12:18:43.760800    2623 node_conditions.go:123] node cpu capacity is 2
	I0524 12:18:43.760806    2623 node_conditions.go:105] duration metric: took 1.748125ms to run NodePressure ...
	I0524 12:18:43.760812    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 12:18:43.821716    2623 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0524 12:18:43.823906    2623 kubeadm.go:787] kubelet initialised
	I0524 12:18:43.823910    2623 kubeadm.go:788] duration metric: took 2.187375ms waiting for restarted kubelet to initialise ...
	I0524 12:18:43.823913    2623 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 12:18:43.826759    2623 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:45.838951    2623 pod_ready.go:102] pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace has status "Ready":"False"
	I0524 12:18:48.342697    2623 pod_ready.go:102] pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace has status "Ready":"False"
	I0524 12:18:49.336843    2623 pod_ready.go:92] pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:49.336856    2623 pod_ready.go:81] duration metric: took 5.510136834s waiting for pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:49.336864    2623 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:51.349460    2623 pod_ready.go:102] pod "etcd-functional-097000" in "kube-system" namespace has status "Ready":"False"
	I0524 12:18:53.347697    2623 pod_ready.go:92] pod "etcd-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:53.347706    2623 pod_ready.go:81] duration metric: took 4.010870625s waiting for pod "etcd-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:53.347713    2623 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.357790    2623 pod_ready.go:92] pod "kube-apiserver-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:55.357800    2623 pod_ready.go:81] duration metric: took 2.01009825s waiting for pod "kube-apiserver-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.357806    2623 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.361657    2623 pod_ready.go:92] pod "kube-controller-manager-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:55.361660    2623 pod_ready.go:81] duration metric: took 3.850291ms waiting for pod "kube-controller-manager-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.361665    2623 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9r29n" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.369554    2623 pod_ready.go:92] pod "kube-proxy-9r29n" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:55.369559    2623 pod_ready.go:81] duration metric: took 7.890958ms waiting for pod "kube-proxy-9r29n" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.369563    2623 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.376728    2623 pod_ready.go:92] pod "kube-scheduler-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:55.376731    2623 pod_ready.go:81] duration metric: took 7.165667ms waiting for pod "kube-scheduler-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.376734    2623 pod_ready.go:38] duration metric: took 11.552912042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
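
Each pod_ready.go wait above reduces to polling a pod until its PodReady condition reports True. An equivalent check with client-go against the same cluster; the kubeconfig path and the 2-second poll interval are placeholders for this sketch.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5d78c9869d-ttkkf", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			panic("timed out waiting for pod to be Ready")
    		case <-time.After(2 * time.Second):
    		}
    	}
    }
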
	I0524 12:18:55.376742    2623 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 12:18:55.381139    2623 ops.go:34] apiserver oom_adj: -16
	I0524 12:18:55.381142    2623 kubeadm.go:640] restartCluster took 17.999591458s
	I0524 12:18:55.381144    2623 kubeadm.go:406] StartCluster complete in 18.017264208s
	I0524 12:18:55.381151    2623 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:18:55.381239    2623 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:18:55.381603    2623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:18:55.381816    2623 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 12:18:55.381856    2623 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 12:18:55.381893    2623 addons.go:66] Setting storage-provisioner=true in profile "functional-097000"
	I0524 12:18:55.381899    2623 addons.go:228] Setting addon storage-provisioner=true in "functional-097000"
	W0524 12:18:55.381901    2623 addons.go:237] addon storage-provisioner should already be in state true
	I0524 12:18:55.381903    2623 addons.go:66] Setting default-storageclass=true in profile "functional-097000"
	I0524 12:18:55.381910    2623 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-097000"
	I0524 12:18:55.381923    2623 host.go:66] Checking if "functional-097000" exists ...
	I0524 12:18:55.381949    2623 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:18:55.387847    2623 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:18:55.390935    2623 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 12:18:55.390939    2623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 12:18:55.390946    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
	I0524 12:18:55.391373    2623 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-097000" context rescaled to 1 replicas
	I0524 12:18:55.391384    2623 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:18:55.394681    2623 out.go:177] * Verifying Kubernetes components...
	I0524 12:18:55.393434    2623 addons.go:228] Setting addon default-storageclass=true in "functional-097000"
	W0524 12:18:55.401843    2623 addons.go:237] addon default-storageclass should already be in state true
	I0524 12:18:55.401856    2623 host.go:66] Checking if "functional-097000" exists ...
	I0524 12:18:55.401895    2623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 12:18:55.402499    2623 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 12:18:55.402502    2623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 12:18:55.402507    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
	I0524 12:18:55.429764    2623 node_ready.go:35] waiting up to 6m0s for node "functional-097000" to be "Ready" ...
	I0524 12:18:55.429783    2623 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0524 12:18:55.430971    2623 node_ready.go:49] node "functional-097000" has status "Ready":"True"
	I0524 12:18:55.430974    2623 node_ready.go:38] duration metric: took 1.202167ms waiting for node "functional-097000" to be "Ready" ...
	I0524 12:18:55.430976    2623 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 12:18:55.434334    2623 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.443027    2623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 12:18:55.443037    2623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 12:18:55.757348    2623 pod_ready.go:92] pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:55.757353    2623 pod_ready.go:81] duration metric: took 323.016542ms waiting for pod "coredns-5d78c9869d-ttkkf" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.757357    2623 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:55.817213    2623 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0524 12:18:55.821205    2623 addons.go:499] enable addons completed in 439.352458ms: enabled=[default-storageclass storage-provisioner]
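
With both addon manifests applied, the result can be confirmed from the host; a quick sketch, assuming the profile name from this run (the default class minikube installs is typically named "standard", backed by the k8s.io/minikube-hostpath provisioner seen later in this log):

    # confirm the two addons report as enabled
    minikube -p functional-097000 addons list | grep -E 'storage-provisioner|default-storageclass'
    # the default StorageClass should now exist
    kubectl get storageclass
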
	I0524 12:18:56.163396    2623 pod_ready.go:92] pod "etcd-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:56.163418    2623 pod_ready.go:81] duration metric: took 406.056709ms waiting for pod "etcd-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:56.163434    2623 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:56.562717    2623 pod_ready.go:92] pod "kube-apiserver-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:56.562742    2623 pod_ready.go:81] duration metric: took 399.2995ms waiting for pod "kube-apiserver-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:56.562763    2623 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:56.960614    2623 pod_ready.go:92] pod "kube-controller-manager-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:56.960626    2623 pod_ready.go:81] duration metric: took 397.857459ms waiting for pod "kube-controller-manager-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:56.960638    2623 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9r29n" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:57.360516    2623 pod_ready.go:92] pod "kube-proxy-9r29n" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:57.360531    2623 pod_ready.go:81] duration metric: took 399.889209ms waiting for pod "kube-proxy-9r29n" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:57.360543    2623 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:57.758139    2623 pod_ready.go:92] pod "kube-scheduler-functional-097000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:18:57.758147    2623 pod_ready.go:81] duration metric: took 397.600667ms waiting for pod "kube-scheduler-functional-097000" in "kube-system" namespace to be "Ready" ...
	I0524 12:18:57.758156    2623 pod_ready.go:38] duration metric: took 2.327192708s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 12:18:57.758173    2623 api_server.go:52] waiting for apiserver process to appear ...
	I0524 12:18:57.758637    2623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 12:18:57.767281    2623 api_server.go:72] duration metric: took 2.375903333s to wait for apiserver process to appear ...
	I0524 12:18:57.767286    2623 api_server.go:88] waiting for apiserver healthz status ...
	I0524 12:18:57.767294    2623 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0524 12:18:57.772433    2623 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0524 12:18:57.773453    2623 api_server.go:141] control plane version: v1.27.2
	I0524 12:18:57.773459    2623 api_server.go:131] duration metric: took 6.170875ms to wait for apiserver health ...
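
The healthz probe above is a plain HTTPS GET against the apiserver. It can be reproduced by hand; -k is needed because the apiserver's serving certificate is signed by the cluster CA rather than one in the host trust store:

    # expected response body: ok
    curl -k https://192.168.105.4:8441/healthz
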
	I0524 12:18:57.773463    2623 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 12:18:57.962650    2623 system_pods.go:59] 7 kube-system pods found
	I0524 12:18:57.962663    2623 system_pods.go:61] "coredns-5d78c9869d-ttkkf" [79970485-ccea-485a-8887-14f60f779e72] Running
	I0524 12:18:57.962668    2623 system_pods.go:61] "etcd-functional-097000" [2ed7ee61-a3b1-420e-8729-7240f2e18f79] Running
	I0524 12:18:57.962672    2623 system_pods.go:61] "kube-apiserver-functional-097000" [d2c9531b-21fe-409b-a454-4d429d64a220] Running
	I0524 12:18:57.962680    2623 system_pods.go:61] "kube-controller-manager-functional-097000" [dab681ae-d452-40aa-a01c-5edd6922c52e] Running
	I0524 12:18:57.962684    2623 system_pods.go:61] "kube-proxy-9r29n" [db856818-dc6b-48c6-9365-08778b30373e] Running
	I0524 12:18:57.962688    2623 system_pods.go:61] "kube-scheduler-functional-097000" [0a38f024-cf4d-4037-a954-df36e2347860] Running
	I0524 12:18:57.962691    2623 system_pods.go:61] "storage-provisioner" [ceca1710-28c8-4200-b6aa-9c0cfeb1efc9] Running
	I0524 12:18:57.962697    2623 system_pods.go:74] duration metric: took 189.231084ms to wait for pod list to return data ...
	I0524 12:18:57.962703    2623 default_sa.go:34] waiting for default service account to be created ...
	I0524 12:18:58.162376    2623 default_sa.go:45] found service account: "default"
	I0524 12:18:58.162403    2623 default_sa.go:55] duration metric: took 199.690584ms for default service account to be created ...
	I0524 12:18:58.162418    2623 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 12:18:58.364357    2623 system_pods.go:86] 7 kube-system pods found
	I0524 12:18:58.364370    2623 system_pods.go:89] "coredns-5d78c9869d-ttkkf" [79970485-ccea-485a-8887-14f60f779e72] Running
	I0524 12:18:58.364376    2623 system_pods.go:89] "etcd-functional-097000" [2ed7ee61-a3b1-420e-8729-7240f2e18f79] Running
	I0524 12:18:58.364382    2623 system_pods.go:89] "kube-apiserver-functional-097000" [d2c9531b-21fe-409b-a454-4d429d64a220] Running
	I0524 12:18:58.364387    2623 system_pods.go:89] "kube-controller-manager-functional-097000" [dab681ae-d452-40aa-a01c-5edd6922c52e] Running
	I0524 12:18:58.364392    2623 system_pods.go:89] "kube-proxy-9r29n" [db856818-dc6b-48c6-9365-08778b30373e] Running
	I0524 12:18:58.364398    2623 system_pods.go:89] "kube-scheduler-functional-097000" [0a38f024-cf4d-4037-a954-df36e2347860] Running
	I0524 12:18:58.364402    2623 system_pods.go:89] "storage-provisioner" [ceca1710-28c8-4200-b6aa-9c0cfeb1efc9] Running
	I0524 12:18:58.364408    2623 system_pods.go:126] duration metric: took 201.985917ms to wait for k8s-apps to be running ...
	I0524 12:18:58.364413    2623 system_svc.go:44] waiting for kubelet service to be running ...
	I0524 12:18:58.364560    2623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 12:18:58.377254    2623 system_svc.go:56] duration metric: took 12.833834ms WaitForService to wait for kubelet.
	I0524 12:18:58.377266    2623 kubeadm.go:581] duration metric: took 2.985894375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 12:18:58.377282    2623 node_conditions.go:102] verifying NodePressure condition ...
	I0524 12:18:58.559770    2623 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 12:18:58.563014    2623 node_conditions.go:123] node cpu capacity is 2
	I0524 12:18:58.563031    2623 node_conditions.go:105] duration metric: took 185.746333ms to run NodePressure ...
	I0524 12:18:58.563042    2623 start.go:228] waiting for startup goroutines ...
	I0524 12:18:58.563049    2623 start.go:233] waiting for cluster config update ...
	I0524 12:18:58.563059    2623 start.go:242] writing updated cluster config ...
	I0524 12:18:58.563666    2623 ssh_runner.go:195] Run: rm -f paused
	I0524 12:18:58.613757    2623 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 12:18:58.617973    2623 out.go:177] 
	W0524 12:18:58.622082    2623 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 12:18:58.624941    2623 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 12:18:58.632087    2623 out.go:177] * Done! kubectl is now configured to use "functional-097000" cluster and "default" namespace by default
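
The skew warning above concerns only the host's /usr/local/bin/kubectl (v1.25.9): kubectl is supported within one minor version of the apiserver, and this cluster is two minors ahead. The version-matched client minikube suggests can be exercised directly; a sketch, assuming the profile from this run:

    # fetches and runs a kubectl matching the cluster's v1.27.2
    minikube -p functional-097000 kubectl -- version --client
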
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 19:16:54 UTC, ends at Wed 2023-05-24 19:19:55 UTC. --
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.572856630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.572801255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.572836422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.572848255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.572852755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.626422690Z" level=info msg="shim disconnected" id=63b3f8cba615c9f2c085a69e83657b52aa347d4c364341373904c3b5ca7ec818 namespace=moby
	May 24 19:19:41 functional-097000 dockerd[7867]: time="2023-05-24T19:19:41.626532982Z" level=info msg="ignoring event" container=63b3f8cba615c9f2c085a69e83657b52aa347d4c364341373904c3b5ca7ec818 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.626900941Z" level=warning msg="cleaning up after shim disconnected" id=63b3f8cba615c9f2c085a69e83657b52aa347d4c364341373904c3b5ca7ec818 namespace=moby
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.626936358Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:19:41 functional-097000 dockerd[7867]: time="2023-05-24T19:19:41.663637997Z" level=info msg="ignoring event" container=0ee7934608502f81cda28ea8f3480fc5952d1025a48b8762a7e4d692860eb661 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.663731331Z" level=info msg="shim disconnected" id=0ee7934608502f81cda28ea8f3480fc5952d1025a48b8762a7e4d692860eb661 namespace=moby
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.663756373Z" level=warning msg="cleaning up after shim disconnected" id=0ee7934608502f81cda28ea8f3480fc5952d1025a48b8762a7e4d692860eb661 namespace=moby
	May 24 19:19:41 functional-097000 dockerd[7873]: time="2023-05-24T19:19:41.663760289Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:19:43 functional-097000 dockerd[7873]: time="2023-05-24T19:19:43.513019208Z" level=info msg="shim disconnected" id=75e926cfa91152d000fe8e432e36306099fadb3de7b699150cd2652810e3dc0c namespace=moby
	May 24 19:19:43 functional-097000 dockerd[7873]: time="2023-05-24T19:19:43.513053000Z" level=warning msg="cleaning up after shim disconnected" id=75e926cfa91152d000fe8e432e36306099fadb3de7b699150cd2652810e3dc0c namespace=moby
	May 24 19:19:43 functional-097000 dockerd[7873]: time="2023-05-24T19:19:43.513058250Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:19:43 functional-097000 dockerd[7867]: time="2023-05-24T19:19:43.513297001Z" level=info msg="ignoring event" container=75e926cfa91152d000fe8e432e36306099fadb3de7b699150cd2652810e3dc0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.579179867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.579251743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.579276868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.579294202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:48 functional-097000 dockerd[7867]: time="2023-05-24T19:19:48.622640687Z" level=info msg="ignoring event" container=6f8ee324f13a9a7150dc8530703fe1135ad05c9aa7a5ae2f27068a562d546c21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.622789022Z" level=info msg="shim disconnected" id=6f8ee324f13a9a7150dc8530703fe1135ad05c9aa7a5ae2f27068a562d546c21 namespace=moby
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.622824356Z" level=warning msg="cleaning up after shim disconnected" id=6f8ee324f13a9a7150dc8530703fe1135ad05c9aa7a5ae2f27068a562d546c21 namespace=moby
	May 24 19:19:48 functional-097000 dockerd[7873]: time="2023-05-24T19:19:48.622828189Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	6f8ee324f13a9       72565bf5bbedf                                                                                         7 seconds ago        Exited              echoserver-arm            3                   5eeec8ead8823
	0ee7934608502       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   14 seconds ago       Exited              mount-munger              0                   75e926cfa9115
	63b3f8cba615c       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   f8ef9ce712639
	5385c092bd235       nginx@sha256:f5747a42e3adcb3168049d63278d7251d91185bb5111d2563d58729a5c9179b0                         24 seconds ago       Running             myfrontend                0                   17e8d78cc346b
	a4346fa4c362b       nginx@sha256:02ffd439b71d9ea9408e449b568f65c0bbbb94bebd8750f1d80231ab6496008e                         39 seconds ago       Running             nginx                     0                   dc69cbded1e55
	7b0aec6d6539b       97e04611ad434                                                                                         About a minute ago   Running             coredns                   3                   f1884a9621ba3
	15c5d6ae7b678       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   545f0f7cb9ee1
	eecf6d53d4465       29921a0845422                                                                                         About a minute ago   Running             kube-proxy                3                   301abd77b250a
	696b71d60a003       72c9df6be7f1b                                                                                         About a minute ago   Running             kube-apiserver            0                   4fedf6d967599
	938b128d9477b       24bc64e911039                                                                                         About a minute ago   Running             etcd                      3                   7852d0dbf404b
	51fc58b0f7960       2ee705380c3c5                                                                                         About a minute ago   Running             kube-controller-manager   3                   1053e243903f3
	dcfd7a1fcd6c5       305d7ed1dae28                                                                                         About a minute ago   Running             kube-scheduler            3                   e0a438d1fa21c
	fec0eea3441e3       2ee705380c3c5                                                                                         About a minute ago   Exited              kube-controller-manager   2                   f51909320c9bb
	a43b76e168a97       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   2f9fd4cfe9970
	3733b6e1ac40b       305d7ed1dae28                                                                                         About a minute ago   Created             kube-scheduler            2                   d57dbbe7af730
	3d5ebbe246171       29921a0845422                                                                                         About a minute ago   Exited              kube-proxy                2                   418d7c61bfa11
	576eaa54beed4       24bc64e911039                                                                                         2 minutes ago        Exited              etcd                      2                   09eb1f9460036
	c287048ea33e1       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   2                   54db4c648b28d
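
The two Exited echoserver-arm rows (attempts 2 and 3) line up with the CrashLoopBackOff errors in the kubelet section further down. To look at one of them inside the VM, a sketch using a container ID from the table:

    # dump the crashed container's output and exit code
    minikube -p functional-097000 ssh -- docker logs 6f8ee324f13a
    minikube -p functional-097000 ssh -- docker inspect --format '{{.State.ExitCode}}' 6f8ee324f13a
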
	
	* 
	* ==> coredns [7b0aec6d6539] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42759 - 35544 "HINFO IN 3942871017208658869.775228238536521380. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.00885057s
	[INFO] 10.244.0.1:17161 - 12716 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000096746s
	[INFO] 10.244.0.1:42023 - 33521 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000111871s
	[INFO] 10.244.0.1:14412 - 19967 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000034957s
	[INFO] 10.244.0.1:3537 - 36602 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001043006s
	[INFO] 10.244.0.1:22226 - 35733 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000062248s
	[INFO] 10.244.0.1:55728 - 61604 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000082247s
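
The NOERROR answers for nginx-svc.default.svc.cluster.local show in-cluster DNS resolving correctly after the restart. The standard way to reproduce such a lookup is from a throwaway pod; busybox:1.28 is the conventional image for this because its nslookup output is well behaved:

    # one-shot DNS check from inside the cluster
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup nginx-svc.default.svc.cluster.local
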
	
	* 
	* ==> coredns [c287048ea33e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52641 - 59692 "HINFO IN 6365829362095346771.6905281848395351074. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010088007s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-097000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-097000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=functional-097000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T12_17_13_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-097000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:19:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:19:43 +0000   Wed, 24 May 2023 19:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:19:43 +0000   Wed, 24 May 2023 19:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:19:43 +0000   Wed, 24 May 2023 19:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:19:43 +0000   Wed, 24 May 2023 19:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-097000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a73bf34238044ed9240aa9b1daade50
	  System UUID:                3a73bf34238044ed9240aa9b1daade50
	  Boot ID:                    fc9a264d-0255-48b9-a36f-c2ac6c01d0a3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-qssp9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  default                     hello-node-connect-58d66798bb-kzshd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 coredns-5d78c9869d-ttkkf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m31s
	  kube-system                 etcd-functional-097000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m44s
	  kube-system                 kube-apiserver-functional-097000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-functional-097000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-9r29n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-functional-097000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (4%)   170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m29s                  kube-proxy       
	  Normal  Starting                 73s                    kube-proxy       
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m49s (x8 over 2m49s)  kubelet          Node functional-097000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m49s (x8 over 2m49s)  kubelet          Node functional-097000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m49s (x7 over 2m49s)  kubelet          Node functional-097000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m44s                  kubelet          Node functional-097000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m44s                  kubelet          Node functional-097000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s                  kubelet          Node functional-097000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m42s                  kubelet          Node functional-097000 status is now: NodeReady
	  Normal  RegisteredNode           2m32s                  node-controller  Node functional-097000 event: Registered Node functional-097000 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node functional-097000 event: Registered Node functional-097000 in Controller
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)      kubelet          Node functional-097000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)      kubelet          Node functional-097000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 78s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)      kubelet          Node functional-097000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                    node-controller  Node functional-097000 event: Registered Node functional-097000 in Controller
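
The three Starting/RegisteredNode groups (2m49s, 78s, and the 107s/62s registrations) match the initial boot plus the two kubelet restarts that also show up as ATTEMPT 2 and 3 in the container table above. The same event stream can be pulled in order; a sketch:

    # node events land in the default namespace
    kubectl get events --field-selector involvedObject.name=functional-097000 \
      --sort-by=.lastTimestamp
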
	
	* 
	* ==> dmesg <==
	* [  +0.096734] systemd-fstab-generator[3713]: Ignoring "noauto" for root device
	[  +0.108931] systemd-fstab-generator[3726]: Ignoring "noauto" for root device
	[  +1.511497] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.701546] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.199717] systemd-fstab-generator[4993]: Ignoring "noauto" for root device
	[  +0.103113] systemd-fstab-generator[5102]: Ignoring "noauto" for root device
	[  +0.100391] systemd-fstab-generator[5208]: Ignoring "noauto" for root device
	[  +0.124633] systemd-fstab-generator[5304]: Ignoring "noauto" for root device
	[  +0.132915] systemd-fstab-generator[5427]: Ignoring "noauto" for root device
	[ +12.520636] kauditd_printk_skb: 51 callbacks suppressed
	[May24 19:18] systemd-fstab-generator[7134]: Ignoring "noauto" for root device
	[  +0.140963] systemd-fstab-generator[7167]: Ignoring "noauto" for root device
	[  +0.097655] systemd-fstab-generator[7178]: Ignoring "noauto" for root device
	[  +0.090489] systemd-fstab-generator[7191]: Ignoring "noauto" for root device
	[ +16.615971] systemd-fstab-generator[8296]: Ignoring "noauto" for root device
	[  +0.107215] systemd-fstab-generator[8393]: Ignoring "noauto" for root device
	[  +0.126430] systemd-fstab-generator[8593]: Ignoring "noauto" for root device
	[  +0.186589] systemd-fstab-generator[8649]: Ignoring "noauto" for root device
	[  +0.171971] systemd-fstab-generator[8852]: Ignoring "noauto" for root device
	[  +1.411218] systemd-fstab-generator[9388]: Ignoring "noauto" for root device
	[  +4.417196] kauditd_printk_skb: 67 callbacks suppressed
	[ +11.667573] kauditd_printk_skb: 5 callbacks suppressed
	[May24 19:19] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +16.720240] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.242681] kauditd_printk_skb: 4 callbacks suppressed
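
The "suspect GRO implementation" warning comes from the kernel's TCP stack and is common with virtio NICs under QEMU; it is generally harmless in CI. If it needed silencing, GRO can be turned off inside the guest, a sketch (assuming ethtool is present in the guest image, which is not guaranteed):

    minikube -p functional-097000 ssh -- sudo ethtool -K eth0 gro off
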
	
	* 
	* ==> etcd [576eaa54beed] <==
	* {"level":"info","ts":"2023-05-24T19:17:54.214Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-05-24T19:17:54.214Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-24T19:17:54.214Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-24T19:17:54.214Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:17:54.214Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:17:56.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-05-24T19:17:56.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-05-24T19:17:56.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-05-24T19:17:56.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-05-24T19:17:56.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-05-24T19:17:56.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-05-24T19:17:56.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-05-24T19:17:56.119Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:17:56.119Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:17:56.121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T19:17:56.121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T19:17:56.119Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-097000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T19:17:56.122Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T19:17:56.122Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-05-24T19:18:19.848Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-05-24T19:18:19.848Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-097000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-05-24T19:18:19.856Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-05-24T19:18:19.857Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-24T19:18:19.858Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-24T19:18:19.858Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-097000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [938b128d9477] <==
	* {"level":"info","ts":"2023-05-24T19:18:39.661Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T19:18:39.661Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-24T19:18:39.664Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-24T19:18:39.662Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 4"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 5"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 5"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 5"}
	{"level":"info","ts":"2023-05-24T19:18:41.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 5"}
	{"level":"info","ts":"2023-05-24T19:18:41.550Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-097000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T19:18:41.550Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T19:18:41.550Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T19:18:41.550Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:18:41.550Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:18:41.553Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T19:18:41.553Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	
	* 
	* ==> kernel <==
	*  19:19:56 up 3 min,  0 users,  load average: 1.10, 0.55, 0.22
	Linux functional-097000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [696b71d60a00] <==
	* I0524 19:18:42.197014       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0524 19:18:42.184024       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0524 19:18:42.243220       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 19:18:42.249184       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0524 19:18:42.288038       1 cache.go:39] Caches are synced for autoregister controller
	I0524 19:18:42.288132       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 19:18:42.288062       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0524 19:18:42.288209       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0524 19:18:42.288068       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0524 19:18:42.288605       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0524 19:18:42.296558       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0524 19:18:42.297644       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 19:18:43.056530       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 19:18:43.185386       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 19:18:43.794838       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 19:18:43.798059       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 19:18:43.809390       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0524 19:18:43.817098       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 19:18:43.819317       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 19:18:54.452003       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 19:18:54.453539       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 19:19:01.510432       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0524 19:19:01.555035       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.109.14.214]
	I0524 19:19:13.197884       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.103.164.49]
	I0524 19:19:23.827656       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.108.241.198]
	
	* 
	* ==> kube-controller-manager [51fc58b0f796] <==
	* I0524 19:18:54.438002       1 shared_informer.go:318] Caches are synced for PV protection
	I0524 19:18:54.439119       1 shared_informer.go:318] Caches are synced for deployment
	I0524 19:18:54.442469       1 shared_informer.go:318] Caches are synced for stateful set
	I0524 19:18:54.446431       1 shared_informer.go:318] Caches are synced for endpoint
	I0524 19:18:54.447208       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0524 19:18:54.448660       1 shared_informer.go:318] Caches are synced for job
	I0524 19:18:54.452620       1 shared_informer.go:318] Caches are synced for HPA
	I0524 19:18:54.454567       1 shared_informer.go:318] Caches are synced for GC
	I0524 19:18:54.464260       1 shared_informer.go:318] Caches are synced for expand
	I0524 19:18:54.477553       1 shared_informer.go:318] Caches are synced for ephemeral
	I0524 19:18:54.492850       1 shared_informer.go:318] Caches are synced for daemon sets
	I0524 19:18:54.497023       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0524 19:18:54.502202       1 shared_informer.go:318] Caches are synced for persistent volume
	I0524 19:18:54.510428       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 19:18:54.512548       1 shared_informer.go:318] Caches are synced for disruption
	I0524 19:18:54.514248       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 19:18:54.526228       1 shared_informer.go:318] Caches are synced for attach detach
	I0524 19:18:54.933525       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 19:18:54.981824       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 19:18:54.981942       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0524 19:19:01.512351       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0524 19:19:01.521235       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-qssp9"
	I0524 19:19:19.256641       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0524 19:19:23.782883       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0524 19:19:23.784963       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-kzshd"
	
	* 
	* ==> kube-controller-manager [fec0eea3441e] <==
	* I0524 19:18:37.374740       1 serving.go:348] Generated self-signed cert in-memory
	
	* 
	* ==> kube-proxy [3d5ebbe24617] <==
	* I0524 19:18:00.723684       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0524 19:18:00.723713       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0524 19:18:00.723722       1 server_others.go:551] "Using iptables proxy"
	I0524 19:18:00.731273       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 19:18:00.731285       1 server_others.go:190] "Using iptables Proxier"
	I0524 19:18:00.731298       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 19:18:00.731459       1 server.go:657] "Version info" version="v1.27.2"
	I0524 19:18:00.731467       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:18:00.731707       1 config.go:188] "Starting service config controller"
	I0524 19:18:00.731713       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 19:18:00.731721       1 config.go:97] "Starting endpoint slice config controller"
	I0524 19:18:00.731723       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 19:18:00.731965       1 config.go:315] "Starting node config controller"
	I0524 19:18:00.731968       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 19:18:00.833298       1 shared_informer.go:318] Caches are synced for node config
	I0524 19:18:00.833327       1 shared_informer.go:318] Caches are synced for service config
	I0524 19:18:00.833332       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [eecf6d53d446] <==
	* I0524 19:18:43.049278       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0524 19:18:43.049305       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0524 19:18:43.049315       1 server_others.go:551] "Using iptables proxy"
	I0524 19:18:43.062132       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 19:18:43.062146       1 server_others.go:190] "Using iptables Proxier"
	I0524 19:18:43.062166       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 19:18:43.062424       1 server.go:657] "Version info" version="v1.27.2"
	I0524 19:18:43.062432       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:18:43.063823       1 config.go:315] "Starting node config controller"
	I0524 19:18:43.063829       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 19:18:43.066904       1 config.go:97] "Starting endpoint slice config controller"
	I0524 19:18:43.066911       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 19:18:43.066945       1 config.go:188] "Starting service config controller"
	I0524 19:18:43.066947       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 19:18:43.067275       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0524 19:18:43.164455       1 shared_informer.go:318] Caches are synced for node config
	I0524 19:18:43.167598       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [3733b6e1ac40] <==
	* 
	* 
	* ==> kube-scheduler [dcfd7a1fcd6c] <==
	* I0524 19:18:39.660995       1 serving.go:348] Generated self-signed cert in-memory
	W0524 19:18:42.220660       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0524 19:18:42.220749       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 19:18:42.220780       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0524 19:18:42.220800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0524 19:18:42.243506       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0524 19:18:42.243519       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:18:42.244389       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0524 19:18:42.244788       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0524 19:18:42.244795       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0524 19:18:42.244805       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0524 19:18:42.345752       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
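
The three startup warnings are the scheduler racing the apiserver for the extension-apiserver-authentication configmap; they resolve once the client-ca caches sync (last line above). Were the RBAC binding genuinely missing, the remedy the log itself suggests would look like this, bound here to the user identity the error names; a sketch:

    kubectl create rolebinding scheduler-authentication-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler
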
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 19:16:54 UTC, ends at Wed 2023-05-24 19:19:56 UTC. --
	May 24 19:19:34 functional-097000 kubelet[9394]: I0524 19:19:34.504856    9394 scope.go:115] "RemoveContainer" containerID="3b5e61cd645780f868e053115f774e8dbff86ba3f56260ca8cf35c15ea0b7840"
	May 24 19:19:34 functional-097000 kubelet[9394]: E0524 19:19:34.505361    9394 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-qssp9_default(48948cbf-243f-4fde-be59-4200f9512da0)\"" pod="default/hello-node-7b684b55f9-qssp9" podUID=48948cbf-243f-4fde-be59-4200f9512da0
	May 24 19:19:34 functional-097000 kubelet[9394]: I0524 19:19:34.538898    9394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.879997069 podCreationTimestamp="2023-05-24 19:19:30 +0000 UTC" firstStartedPulling="2023-05-24 19:19:30.866175338 +0000 UTC m=+52.451872329" lastFinishedPulling="2023-05-24 19:19:31.524973517 +0000 UTC m=+53.110670549" observedRunningTime="2023-05-24 19:19:32.29305148 +0000 UTC m=+53.878748512" watchObservedRunningTime="2023-05-24 19:19:34.538795289 +0000 UTC m=+56.124492321"
	May 24 19:19:38 functional-097000 kubelet[9394]: E0524 19:19:38.510451    9394 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:19:38 functional-097000 kubelet[9394]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:19:38 functional-097000 kubelet[9394]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:19:38 functional-097000 kubelet[9394]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:19:39 functional-097000 kubelet[9394]: I0524 19:19:39.215424    9394 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:19:39 functional-097000 kubelet[9394]: I0524 19:19:39.216199    9394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htnsv\" (UniqueName: \"kubernetes.io/projected/1422403e-f132-472f-95f2-6c52d50c03a1-kube-api-access-htnsv\") pod \"busybox-mount\" (UID: \"1422403e-f132-472f-95f2-6c52d50c03a1\") " pod="default/busybox-mount"
	May 24 19:19:39 functional-097000 kubelet[9394]: I0524 19:19:39.216222    9394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1422403e-f132-472f-95f2-6c52d50c03a1-test-volume\") pod \"busybox-mount\" (UID: \"1422403e-f132-472f-95f2-6c52d50c03a1\") " pod="default/busybox-mount"
	May 24 19:19:41 functional-097000 kubelet[9394]: I0524 19:19:41.502518    9394 scope.go:115] "RemoveContainer" containerID="dd3a739d970de0dc419c6ab6b3e4ce0549f768b27dc27c4f8c274b8a6760acfb"
	May 24 19:19:42 functional-097000 kubelet[9394]: I0524 19:19:42.411754    9394 scope.go:115] "RemoveContainer" containerID="dd3a739d970de0dc419c6ab6b3e4ce0549f768b27dc27c4f8c274b8a6760acfb"
	May 24 19:19:42 functional-097000 kubelet[9394]: I0524 19:19:42.412108    9394 scope.go:115] "RemoveContainer" containerID="63b3f8cba615c9f2c085a69e83657b52aa347d4c364341373904c3b5ca7ec818"
	May 24 19:19:42 functional-097000 kubelet[9394]: E0524 19:19:42.413118    9394 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-kzshd_default(aad40fa8-bb2f-4177-a29e-7b3ddd5d18bb)\"" pod="default/hello-node-connect-58d66798bb-kzshd" podUID=aad40fa8-bb2f-4177-a29e-7b3ddd5d18bb
	May 24 19:19:43 functional-097000 kubelet[9394]: I0524 19:19:43.659579    9394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htnsv\" (UniqueName: \"kubernetes.io/projected/1422403e-f132-472f-95f2-6c52d50c03a1-kube-api-access-htnsv\") pod \"1422403e-f132-472f-95f2-6c52d50c03a1\" (UID: \"1422403e-f132-472f-95f2-6c52d50c03a1\") "
	May 24 19:19:43 functional-097000 kubelet[9394]: I0524 19:19:43.659612    9394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1422403e-f132-472f-95f2-6c52d50c03a1-test-volume\") pod \"1422403e-f132-472f-95f2-6c52d50c03a1\" (UID: \"1422403e-f132-472f-95f2-6c52d50c03a1\") "
	May 24 19:19:43 functional-097000 kubelet[9394]: I0524 19:19:43.659648    9394 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1422403e-f132-472f-95f2-6c52d50c03a1-test-volume" (OuterVolumeSpecName: "test-volume") pod "1422403e-f132-472f-95f2-6c52d50c03a1" (UID: "1422403e-f132-472f-95f2-6c52d50c03a1"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 24 19:19:43 functional-097000 kubelet[9394]: I0524 19:19:43.663019    9394 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1422403e-f132-472f-95f2-6c52d50c03a1-kube-api-access-htnsv" (OuterVolumeSpecName: "kube-api-access-htnsv") pod "1422403e-f132-472f-95f2-6c52d50c03a1" (UID: "1422403e-f132-472f-95f2-6c52d50c03a1"). InnerVolumeSpecName "kube-api-access-htnsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 24 19:19:43 functional-097000 kubelet[9394]: I0524 19:19:43.760240    9394 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-htnsv\" (UniqueName: \"kubernetes.io/projected/1422403e-f132-472f-95f2-6c52d50c03a1-kube-api-access-htnsv\") on node \"functional-097000\" DevicePath \"\""
	May 24 19:19:43 functional-097000 kubelet[9394]: I0524 19:19:43.760275    9394 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1422403e-f132-472f-95f2-6c52d50c03a1-test-volume\") on node \"functional-097000\" DevicePath \"\""
	May 24 19:19:44 functional-097000 kubelet[9394]: I0524 19:19:44.434154    9394 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e926cfa91152d000fe8e432e36306099fadb3de7b699150cd2652810e3dc0c"
	May 24 19:19:48 functional-097000 kubelet[9394]: I0524 19:19:48.505287    9394 scope.go:115] "RemoveContainer" containerID="3b5e61cd645780f868e053115f774e8dbff86ba3f56260ca8cf35c15ea0b7840"
	May 24 19:19:49 functional-097000 kubelet[9394]: I0524 19:19:49.512535    9394 scope.go:115] "RemoveContainer" containerID="3b5e61cd645780f868e053115f774e8dbff86ba3f56260ca8cf35c15ea0b7840"
	May 24 19:19:49 functional-097000 kubelet[9394]: I0524 19:19:49.513418    9394 scope.go:115] "RemoveContainer" containerID="6f8ee324f13a9a7150dc8530703fe1135ad05c9aa7a5ae2f27068a562d546c21"
	May 24 19:19:49 functional-097000 kubelet[9394]: E0524 19:19:49.513687    9394 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-qssp9_default(48948cbf-243f-4fde-be59-4200f9512da0)\"" pod="default/hello-node-7b684b55f9-qssp9" podUID=48948cbf-243f-4fde-be59-4200f9512da0
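
Two distinct problems are visible in the kubelet log above: the iptables canary cannot create an IPv6 NAT chain (the guest kernel lacks the ip6tables nat table), and echoserver-arm is in CrashLoopBackOff. A quick triage sketch, assuming SSH access to the node and using the pod and node names from the log (standard minikube/kubectl commands, not part of the harness):

	# ip6tables canary: is the IPv6 NAT module present in the guest kernel?
	minikube -p functional-097000 ssh -- "lsmod | grep ip6table_nat || sudo modprobe ip6table_nat"
	# CrashLoopBackOff: inspect the previous container attempt and the node architecture
	kubectl --context functional-097000 logs hello-node-7b684b55f9-qssp9 --previous
	kubectl --context functional-097000 get node functional-097000 -o jsonpath='{.status.nodeInfo.architecture}'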
	
	* 
	* ==> storage-provisioner [15c5d6ae7b67] <==
	* I0524 19:18:43.083449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 19:18:43.090110       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 19:18:43.090142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 19:19:00.482038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 19:19:00.482110       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-097000_bf3e2605-d68f-4098-ae0e-fbb7d9ddfca9!
	I0524 19:19:00.482449       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb456913-de61-4d56-8ce8-ede6ecc711a1", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-097000_bf3e2605-d68f-4098-ae0e-fbb7d9ddfca9 became leader
	I0524 19:19:00.582562       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-097000_bf3e2605-d68f-4098-ae0e-fbb7d9ddfca9!
	I0524 19:19:19.257074       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0524 19:19:19.257169       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e27bbbf8-c71b-43d5-89e5-00698436b765 392 0 2023-05-24 19:17:26 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-05-24 19:17:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-71ce3e58-ec8d-4c68-823b-b044ac5a321f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  71ce3e58-ec8d-4c68-823b-b044ac5a321f 709 0 2023-05-24 19:19:19 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-05-24 19:19:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-05-24 19:19:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0524 19:19:19.257773       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"71ce3e58-ec8d-4c68-823b-b044ac5a321f", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0524 19:19:19.257903       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-71ce3e58-ec8d-4c68-823b-b044ac5a321f" provisioned
	I0524 19:19:19.257925       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0524 19:19:19.257961       1 volume_store.go:212] Trying to save persistentvolume "pvc-71ce3e58-ec8d-4c68-823b-b044ac5a321f"
	I0524 19:19:19.262964       1 volume_store.go:219] persistentvolume "pvc-71ce3e58-ec8d-4c68-823b-b044ac5a321f" saved
	I0524 19:19:19.263085       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"71ce3e58-ec8d-4c68-823b-b044ac5a321f", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-71ce3e58-ec8d-4c68-823b-b044ac5a321f
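
Stripped of the struct dump, the provisioner log above records a 500Mi ReadWriteOnce claim (default/myclaim, class "standard") being provisioned as a hostpath volume under /tmp/hostpath-provisioner/default/myclaim. A hedged way to verify the binding after the fact, using only names taken from the log:

	kubectl --context functional-097000 get pvc myclaim -n default
	kubectl --context functional-097000 get pv pvc-71ce3e58-ec8d-4c68-823b-b044ac5a321f -o jsonpath='{.spec.hostPath.path}'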
	
	* 
	* ==> storage-provisioner [a43b76e168a9] <==
	* I0524 19:18:36.935917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0524 19:18:36.938901       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
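
This earlier storage-provisioner instance died because the in-cluster API VIP was not yet reachable; the [15c5d6ae7b67] instance shown above started a few seconds later (19:18:43) and succeeded. A quick reachability probe for that state, a sketch using the address from the error message (standard minikube/curl, not part of the harness):

	minikube -p functional-097000 ssh -- curl -sk https://10.96.0.1:443/version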
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-097000 -n functional-097000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-097000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-097000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-097000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-097000/192.168.105.4
	Start Time:       Wed, 24 May 2023 12:19:39 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  mount-munger:
	    Container ID:  docker://0ee7934608502f81cda28ea8f3480fc5952d1025a48b8762a7e4d692860eb661
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 24 May 2023 12:19:41 -0700
	      Finished:     Wed, 24 May 2023 12:19:41 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htnsv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-htnsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  17s   default-scheduler  Successfully assigned default/busybox-mount to functional-097000
	  Normal  Pulling    17s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.800495663s (1.800499829s including waiting)
	  Normal  Created    15s   kubelet            Created container mount-munger
	  Normal  Started    15s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.87s)

TestImageBuild/serial/BuildWithBuildArg (1.13s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-594000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-594000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 9bf039f621fa
	Removing intermediate container 9bf039f621fa
	 ---> e9d98666ff4d
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1af7b7123ae8
	Removing intermediate container 1af7b7123ae8
	 ---> 5cfe6555b2a1
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1acd4219c473
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
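
The "exec format error" at Step 4/5 is the usual symptom of executing linux/amd64 image layers on an arm64 host without emulation, which matches the platform warnings in the build output above. Two standard work-arounds, sketched with plain Docker tooling (not part of this test harness; the second assumes a base image that actually publishes an arm64 variant):

	# one-time: register qemu user-mode handlers so amd64 binaries can run on arm64
	docker run --privileged --rm tonistiigi/binfmt --install amd64
	# or rebuild against an arm64-capable base, pinning the platform explicitly
	docker build --platform linux/arm64 -t aaa:latest ./testdata/image-build/test-arg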
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-594000 -n image-594000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-594000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:19 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| start          | -p functional-097000                     | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-097000 --dry-run           | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-097000                     | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:19 PDT | 24 May 23 12:20 PDT |
	|                | -p functional-097000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh findmnt            | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| update-context | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-097000 ssh pgrep              | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-097000 image build -t         | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | localhost/my-image:functional-097000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-097000 image ls               | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	| image          | functional-097000                        | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| delete         | -p functional-097000                     | functional-097000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	| start          | -p image-594000 --driver=qemu2           | image-594000      | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-594000      | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-594000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-594000      | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-594000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 12:20:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 12:20:04.486412    3028 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:20:04.486553    3028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:20:04.486554    3028 out.go:309] Setting ErrFile to fd 2...
	I0524 12:20:04.486556    3028 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:20:04.486620    3028 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:20:04.487783    3028 out.go:303] Setting JSON to false
	I0524 12:20:04.504159    3028 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2975,"bootTime":1684953029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:20:04.504216    3028 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:20:04.507946    3028 out.go:177] * [image-594000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:20:04.516099    3028 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:20:04.516141    3028 notify.go:220] Checking for updates...
	I0524 12:20:04.521999    3028 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:20:04.525034    3028 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:20:04.533028    3028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:20:04.536039    3028 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:20:04.539050    3028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:20:04.542211    3028 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:20:04.546012    3028 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:20:04.552954    3028 start.go:295] selected driver: qemu2
	I0524 12:20:04.552959    3028 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:20:04.552966    3028 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:20:04.553019    3028 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:20:04.556004    3028 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:20:04.561344    3028 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0524 12:20:04.561429    3028 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 12:20:04.561445    3028 cni.go:84] Creating CNI manager for ""
	I0524 12:20:04.561453    3028 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:20:04.561456    3028 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:20:04.561463    3028 start_flags.go:319] config:
	{Name:image-594000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:20:04.561534    3028 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:20:04.569007    3028 out.go:177] * Starting control plane node image-594000 in cluster image-594000
	I0524 12:20:04.572952    3028 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:20:04.572971    3028 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:20:04.572980    3028 cache.go:57] Caching tarball of preloaded images
	I0524 12:20:04.573037    3028 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:20:04.573040    3028 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:20:04.573220    3028 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/config.json ...
	I0524 12:20:04.573232    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/config.json: {Name:mk2df99c9b6dbf0990f8d870b80e0e1e760b0d2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:04.573434    3028 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:20:04.573446    3028 start.go:364] acquiring machines lock for image-594000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:20:04.573475    3028 start.go:368] acquired machines lock for "image-594000" in 25.5µs
	I0524 12:20:04.573486    3028 start.go:93] Provisioning new machine with config: &{Name:image-594000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:20:04.573516    3028 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:20:04.580890    3028 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0524 12:20:04.603390    3028 start.go:159] libmachine.API.Create for "image-594000" (driver="qemu2")
	I0524 12:20:04.603417    3028 client.go:168] LocalClient.Create starting
	I0524 12:20:04.603470    3028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:20:04.603619    3028 main.go:141] libmachine: Decoding PEM data...
	I0524 12:20:04.603627    3028 main.go:141] libmachine: Parsing certificate...
	I0524 12:20:04.603668    3028 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:20:04.603754    3028 main.go:141] libmachine: Decoding PEM data...
	I0524 12:20:04.603758    3028 main.go:141] libmachine: Parsing certificate...
	I0524 12:20:04.604028    3028 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:20:04.967146    3028 main.go:141] libmachine: Creating SSH key...
	I0524 12:20:05.108777    3028 main.go:141] libmachine: Creating Disk image...
	I0524 12:20:05.108783    3028 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:20:05.108947    3028 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/disk.qcow2
	I0524 12:20:05.182841    3028 main.go:141] libmachine: STDOUT: 
	I0524 12:20:05.182857    3028 main.go:141] libmachine: STDERR: 
	I0524 12:20:05.182938    3028 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/disk.qcow2 +20000M
	I0524 12:20:05.190348    3028 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:20:05.190359    3028 main.go:141] libmachine: STDERR: 
	I0524 12:20:05.190382    3028 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/disk.qcow2
	I0524 12:20:05.190387    3028 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:20:05.190423    3028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:af:ab:50:07:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/disk.qcow2
	I0524 12:20:05.246893    3028 main.go:141] libmachine: STDOUT: 
	I0524 12:20:05.246918    3028 main.go:141] libmachine: STDERR: 
	I0524 12:20:05.246922    3028 main.go:141] libmachine: Attempt 0
	I0524 12:20:05.246943    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:05.250971    3028 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0524 12:20:05.250998    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:05.251010    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:05.251014    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:07.253154    3028 main.go:141] libmachine: Attempt 1
	I0524 12:20:07.253210    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:07.253576    3028 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0524 12:20:07.253615    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:07.253680    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:07.253744    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:09.255881    3028 main.go:141] libmachine: Attempt 2
	I0524 12:20:09.255899    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:09.256035    3028 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0524 12:20:09.256046    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:09.256050    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:09.256054    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:11.258096    3028 main.go:141] libmachine: Attempt 3
	I0524 12:20:11.258100    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:11.258281    3028 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0524 12:20:11.258299    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:11.258311    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:11.258315    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:13.260316    3028 main.go:141] libmachine: Attempt 4
	I0524 12:20:13.260320    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:13.260357    3028 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0524 12:20:13.260362    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:13.260366    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:13.260370    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:15.261257    3028 main.go:141] libmachine: Attempt 5
	I0524 12:20:15.261269    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:15.261369    3028 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0524 12:20:15.261376    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:15.261381    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:15.261385    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:17.261540    3028 main.go:141] libmachine: Attempt 6
	I0524 12:20:17.261550    3028 main.go:141] libmachine: Searching for 7a:af:ab:50:7:7c in /var/db/dhcpd_leases ...
	I0524 12:20:17.261619    3028 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:17.261626    3028 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:17.261629    3028 main.go:141] libmachine: Found match: 7a:af:ab:50:7:7c
	I0524 12:20:17.261637    3028 main.go:141] libmachine: IP: 192.168.105.5
	I0524 12:20:17.261641    3028 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
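
The Attempt 0 through Attempt 6 lines above are minikube polling the macOS DHCP lease database until the new VM's MAC address appears. A hypothetical shell equivalent of that loop (illustrative only, not minikube's actual implementation; MAC taken from the log):

	MAC='7a:af:ab:50:7:7c'
	# wait until the lease file mentions the VM's MAC at all
	until grep -q "hw_address=1,$MAC" /var/db/dhcpd_leases; do sleep 2; done
	# then print the ip_address belonging to the matching lease entry
	awk -v mac="$MAC" '
	  sub(/.*ip_address=/, "")        { ip = $0 }
	  index($0, "hw_address=1," mac)  { print ip; exit }
	' /var/db/dhcpd_leases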
	I0524 12:20:19.281368    3028 machine.go:88] provisioning docker machine ...
	I0524 12:20:19.281428    3028 buildroot.go:166] provisioning hostname "image-594000"
	I0524 12:20:19.281677    3028 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:19.282652    3028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b46d0] 0x1012b7130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0524 12:20:19.282667    3028 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-594000 && echo "image-594000" | sudo tee /etc/hostname
	I0524 12:20:19.380315    3028 main.go:141] libmachine: SSH cmd err, output: <nil>: image-594000
	
	I0524 12:20:19.380429    3028 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:19.380926    3028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b46d0] 0x1012b7130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0524 12:20:19.380939    3028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-594000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-594000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-594000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 12:20:19.457270    3028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 12:20:19.457284    3028 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 12:20:19.457295    3028 buildroot.go:174] setting up certificates
	I0524 12:20:19.457308    3028 provision.go:83] configureAuth start
	I0524 12:20:19.457313    3028 provision.go:138] copyHostCerts
	I0524 12:20:19.457445    3028 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem, removing ...
	I0524 12:20:19.457471    3028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem
	I0524 12:20:19.457652    3028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 12:20:19.457989    3028 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem, removing ...
	I0524 12:20:19.457992    3028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem
	I0524 12:20:19.458052    3028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 12:20:19.458234    3028 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem, removing ...
	I0524 12:20:19.458237    3028 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem
	I0524 12:20:19.458294    3028 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 12:20:19.458730    3028 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.image-594000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-594000]
	I0524 12:20:19.547322    3028 provision.go:172] copyRemoteCerts
	I0524 12:20:19.547367    3028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 12:20:19.547372    3028 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/id_rsa Username:docker}
	I0524 12:20:19.581742    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 12:20:19.589420    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0524 12:20:19.595690    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 12:20:19.602535    3028 provision.go:86] duration metric: configureAuth took 145.221541ms
	I0524 12:20:19.602540    3028 buildroot.go:189] setting minikube options for container-runtime
	I0524 12:20:19.602631    3028 config.go:182] Loaded profile config "image-594000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:20:19.602658    3028 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:19.602866    3028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b46d0] 0x1012b7130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0524 12:20:19.602869    3028 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 12:20:19.668500    3028 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 12:20:19.668504    3028 buildroot.go:70] root file system type: tmpfs
	I0524 12:20:19.668566    3028 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 12:20:19.668616    3028 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:19.668864    3028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b46d0] 0x1012b7130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0524 12:20:19.668901    3028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 12:20:19.737234    3028 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 12:20:19.737283    3028 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:19.737565    3028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b46d0] 0x1012b7130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0524 12:20:19.737574    3028 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 12:20:20.087112    3028 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 12:20:20.087120    3028 machine.go:91] provisioned docker machine in 805.745208ms
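The unit update that completed above follows a write-if-changed pattern: render the new file to docker.service.new, and only when diff reports a difference (or the target is missing, as on this first boot) move it into place and reload/enable/restart. The same idempotent step, condensed, with paths taken from the log:

    # Replace the unit and bounce docker only when the rendered file differs;
    # diff exits non-zero on any difference or when the target does not exist.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    }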
	I0524 12:20:20.087125    3028 client.go:171] LocalClient.Create took 15.483832125s
	I0524 12:20:20.087146    3028 start.go:167] duration metric: libmachine.API.Create for "image-594000" took 15.483882875s
	I0524 12:20:20.087149    3028 start.go:300] post-start starting for "image-594000" (driver="qemu2")
	I0524 12:20:20.087151    3028 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 12:20:20.087212    3028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 12:20:20.087220    3028 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/id_rsa Username:docker}
	I0524 12:20:20.124048    3028 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 12:20:20.127550    3028 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 12:20:20.127558    3028 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 12:20:20.127629    3028 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 12:20:20.127736    3028 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem -> 14542.pem in /etc/ssl/certs
	I0524 12:20:20.127845    3028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 12:20:20.130561    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem --> /etc/ssl/certs/14542.pem (1708 bytes)
	I0524 12:20:20.137553    3028 start.go:303] post-start completed in 50.399208ms
	I0524 12:20:20.137974    3028 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/config.json ...
	I0524 12:20:20.138130    3028 start.go:128] duration metric: createHost completed in 15.5647375s
	I0524 12:20:20.138166    3028 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:20.138393    3028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b46d0] 0x1012b7130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0524 12:20:20.138396    3028 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 12:20:20.200478    3028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684956020.551213544
	
	I0524 12:20:20.200484    3028 fix.go:207] guest clock: 1684956020.551213544
	I0524 12:20:20.200487    3028 fix.go:220] Guest: 2023-05-24 12:20:20.551213544 -0700 PDT Remote: 2023-05-24 12:20:20.138136 -0700 PDT m=+15.674100126 (delta=413.077544ms)
	I0524 12:20:20.200497    3028 fix.go:191] guest clock delta is within tolerance: 413.077544ms
	I0524 12:20:20.200499    3028 start.go:83] releasing machines lock for "image-594000", held for 15.62714875s
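The fix.go lines above gauge clock skew by reading the guest's clock over SSH and subtracting the host's wall time; the 413ms delta is within tolerance, so no resync is needed. A rough sketch of the same measurement (assuming GNU date on both ends and the SSH key path shown in the log):

    # Compare guest and host clocks; a large delta would force a clock resync.
    guest_time="$(ssh -i id_rsa docker@192.168.105.5 'date +%s.%N')"
    host_time="$(date +%s.%N)"
    echo "guest-host delta: $(echo "$guest_time - $host_time" | bc)s"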
	I0524 12:20:20.200785    3028 ssh_runner.go:195] Run: cat /version.json
	I0524 12:20:20.200795    3028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 12:20:20.200793    3028 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/id_rsa Username:docker}
	I0524 12:20:20.200814    3028 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/id_rsa Username:docker}
	I0524 12:20:20.279165    3028 ssh_runner.go:195] Run: systemctl --version
	I0524 12:20:20.281567    3028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 12:20:20.283552    3028 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 12:20:20.283580    3028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 12:20:20.288963    3028 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
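Before configuring its own CNI, minikube disables any pre-existing bridge or podman CNI configs by renaming them, which is what the find/mv at 12:20:20.283580 does. The same step, written out:

    # Rename competing CNI configs aside; the .mk_disabled suffix keeps them
    # recoverable while preventing the runtime from loading them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;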
	I0524 12:20:20.288971    3028 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:20:20.289042    3028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:20:20.299721    3028 docker.go:633] Got preloaded images: 
	I0524 12:20:20.299725    3028 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 12:20:20.299765    3028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 12:20:20.302767    3028 ssh_runner.go:195] Run: which lz4
	I0524 12:20:20.304083    3028 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0524 12:20:20.305381    3028 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 12:20:20.305392    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0524 12:20:21.592089    3028 docker.go:597] Took 1.288066 seconds to copy over tarball
	I0524 12:20:21.592151    3028 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 12:20:22.623347    3028 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.031186584s)
	I0524 12:20:22.623356    3028 ssh_runner.go:146] rm: /preloaded.tar.lz4
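Because the stat probe above failed, the ~344MB preload tarball is copied into the guest and unpacked over /var, seeding the docker image store without pulling from a registry. The guest-side steps, as run in the log:

    # Unpack the preloaded image tarball over /var (populating
    # /var/lib/docker), then delete the tarball to reclaim disk.
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4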
	I0524 12:20:22.638869    3028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 12:20:22.642389    3028 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 12:20:22.647621    3028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:20:22.724446    3028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 12:20:24.184934    3028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.460487459s)
	I0524 12:20:24.184963    3028 start.go:481] detecting cgroup driver to use...
	I0524 12:20:24.185032    3028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 12:20:24.192267    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 12:20:24.195258    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 12:20:24.198368    3028 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 12:20:24.198385    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 12:20:24.203085    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 12:20:24.206707    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 12:20:24.210086    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 12:20:24.213496    3028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 12:20:24.216483    3028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 12:20:24.219333    3028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 12:20:24.222755    3028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 12:20:24.225898    3028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:20:24.308227    3028 ssh_runner.go:195] Run: sudo systemctl restart containerd
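The sed runs above rewrite /etc/containerd/config.toml so containerd matches the cluster's cgroupfs driver and the v2 runc shim, even though docker ends up as the active runtime. The essential edits, condensed from the commands in the log:

    # Point crictl at containerd and force the cgroupfs cgroup driver,
    # then restart containerd to pick up the rewritten config.
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd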
	I0524 12:20:24.316724    3028 start.go:481] detecting cgroup driver to use...
	I0524 12:20:24.316783    3028 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 12:20:24.322102    3028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 12:20:24.327319    3028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 12:20:24.333675    3028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 12:20:24.338394    3028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 12:20:24.343343    3028 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 12:20:24.400885    3028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 12:20:24.406454    3028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 12:20:24.412398    3028 ssh_runner.go:195] Run: which cri-dockerd
	I0524 12:20:24.413634    3028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 12:20:24.416161    3028 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 12:20:24.421024    3028 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 12:20:24.500222    3028 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 12:20:24.568189    3028 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 12:20:24.568206    3028 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 12:20:24.573333    3028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:20:24.652030    3028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 12:20:25.816017    3028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163986667s)
	I0524 12:20:25.816077    3028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 12:20:25.892588    3028 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 12:20:25.973018    3028 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 12:20:26.051489    3028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:20:26.132213    3028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 12:20:26.144503    3028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:20:26.223245    3028 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 12:20:26.247011    3028 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 12:20:26.247086    3028 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 12:20:26.249167    3028 start.go:549] Will wait 60s for crictl version
	I0524 12:20:26.249193    3028 ssh_runner.go:195] Run: which crictl
	I0524 12:20:26.250676    3028 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 12:20:26.268116    3028 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 12:20:26.268188    3028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 12:20:26.278423    3028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 12:20:26.301010    3028 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 12:20:26.301158    3028 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 12:20:26.302570    3028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
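The /etc/hosts rewrite above is idempotent: it filters out any existing host.minikube.internal record, appends the current one, and copies the temp file back into place. Spelled out:

    # Refresh the host.minikube.internal record without duplicating it.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.105.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts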
	I0524 12:20:26.306642    3028 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:20:26.306680    3028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:20:26.314258    3028 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 12:20:26.314264    3028 docker.go:563] Images already preloaded, skipping extraction
	I0524 12:20:26.314321    3028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:20:26.321755    3028 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 12:20:26.321775    3028 cache_images.go:84] Images are preloaded, skipping loading
	I0524 12:20:26.321825    3028 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
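The docker info probe above confirms the cgroup driver before kubeadm's config is rendered; the kubelet's cgroupDriver (see the KubeletConfiguration below) must match it. For reference:

    # Ask the daemon which cgroup driver it uses; here it reports cgroupfs,
    # which is what the generated KubeletConfiguration sets.
    docker info --format '{{.CgroupDriver}}'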
	I0524 12:20:26.331329    3028 cni.go:84] Creating CNI manager for ""
	I0524 12:20:26.331335    3028 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:20:26.331344    3028 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 12:20:26.331352    3028 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-594000 NodeName:image-594000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 12:20:26.331433    3028 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-594000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 12:20:26.331466    3028 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-594000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:image-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 12:20:26.331516    3028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 12:20:26.334914    3028 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 12:20:26.334941    3028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 12:20:26.338068    3028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0524 12:20:26.343537    3028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 12:20:26.348843    3028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0524 12:20:26.353929    3028 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0524 12:20:26.355255    3028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 12:20:26.359168    3028 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000 for IP: 192.168.105.5
	I0524 12:20:26.359175    3028 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:26.359304    3028 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 12:20:26.359562    3028 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 12:20:26.359589    3028 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/client.key
	I0524 12:20:26.359595    3028 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/client.crt with IP's: []
	I0524 12:20:26.626816    3028 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/client.crt ...
	I0524 12:20:26.626822    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/client.crt: {Name:mk979f6962974ffc15b2c60a05f6d1f8a42ea6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:26.627186    3028 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/client.key ...
	I0524 12:20:26.627188    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/client.key: {Name:mk667389cd79e2ec73303a224735d2bd89b0ff13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:26.627309    3028 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.key.e69b33ca
	I0524 12:20:26.627316    3028 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 12:20:26.740572    3028 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.crt.e69b33ca ...
	I0524 12:20:26.740575    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.crt.e69b33ca: {Name:mka24cc906f47214667d3fe21c02fe29f917506a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:26.740786    3028 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.key.e69b33ca ...
	I0524 12:20:26.740788    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.key.e69b33ca: {Name:mk9dc456c7bc26ec2ff0e2ddfc62b12e993b7666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:26.740895    3028 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.crt
	I0524 12:20:26.741104    3028 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.key
	I0524 12:20:26.741204    3028 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.key
	I0524 12:20:26.741210    3028 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.crt with IP's: []
	I0524 12:20:26.811718    3028 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.crt ...
	I0524 12:20:26.811720    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.crt: {Name:mk608d49c28dbfea66ae93bcb574386342942c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:26.811854    3028 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.key ...
	I0524 12:20:26.811856    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.key: {Name:mk8cf723307f0d0836aa5f572e51a979be4a857c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
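minikube generates these client, apiserver, and proxy-client certificates in-process (crypto.go), but an equivalent can be sketched with openssl, assuming the shared ca.crt/ca.key from the profile directory; this is an illustration, not minikube's actual code path:

    # Hypothetical openssl equivalent: issue an apiserver cert signed by the
    # minikube CA, carrying the same SAN IPs listed in the log above.
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -subj '/CN=minikube' -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:192.168.105.5,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1') \
      -days 365 -out apiserver.crt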
	I0524 12:20:26.812098    3028 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem (1338 bytes)
	W0524 12:20:26.812484    3028 certs.go:433] ignoring /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454_empty.pem, impossibly tiny 0 bytes
	I0524 12:20:26.812491    3028 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 12:20:26.812514    3028 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 12:20:26.812531    3028 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 12:20:26.812548    3028 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 12:20:26.812589    3028 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem (1708 bytes)
	I0524 12:20:26.812879    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 12:20:26.820181    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 12:20:26.827104    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 12:20:26.834628    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/image-594000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0524 12:20:26.842062    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 12:20:26.849115    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 12:20:26.856408    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 12:20:26.863126    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 12:20:26.870572    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1708 bytes)
	I0524 12:20:26.878044    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 12:20:26.885137    3028 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem --> /usr/share/ca-certificates/1454.pem (1338 bytes)
	I0524 12:20:26.891914    3028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 12:20:26.896965    3028 ssh_runner.go:195] Run: openssl version
	I0524 12:20:26.898944    3028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 12:20:26.902438    3028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:20:26.904007    3028 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:20:26.904028    3028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:20:26.905864    3028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 12:20:26.908839    3028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1454.pem && ln -fs /usr/share/ca-certificates/1454.pem /etc/ssl/certs/1454.pem"
	I0524 12:20:26.911786    3028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1454.pem
	I0524 12:20:26.913307    3028 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 19:16 /usr/share/ca-certificates/1454.pem
	I0524 12:20:26.913328    3028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1454.pem
	I0524 12:20:26.915175    3028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1454.pem /etc/ssl/certs/51391683.0"
	I0524 12:20:26.918485    3028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I0524 12:20:26.921661    3028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I0524 12:20:26.923171    3028 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 19:16 /usr/share/ca-certificates/14542.pem
	I0524 12:20:26.923190    3028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I0524 12:20:26.924988    3028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/3ec20f2e.0"
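The openssl x509 -hash runs above exist because OpenSSL locates trust anchors in /etc/ssl/certs by subject-hash filename; each CA pem therefore gets a <hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). The pattern:

    # Link a CA into the OpenSSL trust directory under its subject hash.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash="$(openssl x509 -hash -noout -in "$pem")"   # b5213941 in this run
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"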
	I0524 12:20:26.927732    3028 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 12:20:26.929144    3028 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 12:20:26.929170    3028 kubeadm.go:404] StartCluster: {Name:image-594000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:20:26.929229    3028 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 12:20:26.936353    3028 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 12:20:26.939224    3028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 12:20:26.942307    3028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 12:20:26.944904    3028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 12:20:26.944916    3028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 12:20:26.964999    3028 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 12:20:26.965020    3028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 12:20:27.018306    3028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 12:20:27.018361    3028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 12:20:27.018403    3028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 12:20:27.075725    3028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 12:20:27.085770    3028 out.go:204]   - Generating certificates and keys ...
	I0524 12:20:27.085816    3028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 12:20:27.085845    3028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 12:20:27.114288    3028 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 12:20:27.158740    3028 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 12:20:27.785878    3028 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 12:20:27.840724    3028 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 12:20:28.047169    3028 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 12:20:28.047240    3028 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-594000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0524 12:20:28.082000    3028 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 12:20:28.082060    3028 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-594000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0524 12:20:28.123889    3028 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 12:20:28.437263    3028 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 12:20:28.549171    3028 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 12:20:28.549198    3028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 12:20:28.844051    3028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 12:20:28.890559    3028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 12:20:29.042521    3028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 12:20:29.187527    3028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 12:20:29.194372    3028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 12:20:29.194432    3028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 12:20:29.194450    3028 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 12:20:29.284579    3028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 12:20:29.288877    3028 out.go:204]   - Booting up control plane ...
	I0524 12:20:29.288929    3028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 12:20:29.288986    3028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 12:20:29.289057    3028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 12:20:29.289118    3028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 12:20:29.289295    3028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 12:20:33.289199    3028 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002156 seconds
	I0524 12:20:33.289394    3028 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 12:20:33.304221    3028 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 12:20:33.817889    3028 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 12:20:33.818004    3028 kubeadm.go:322] [mark-control-plane] Marking the node image-594000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 12:20:34.324067    3028 kubeadm.go:322] [bootstrap-token] Using token: ihsmrr.wgwpewvmjksejiw0
	I0524 12:20:34.328361    3028 out.go:204]   - Configuring RBAC rules ...
	I0524 12:20:34.328412    3028 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 12:20:34.332928    3028 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 12:20:34.335323    3028 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 12:20:34.336447    3028 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 12:20:34.337489    3028 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 12:20:34.338622    3028 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 12:20:34.342946    3028 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 12:20:34.522267    3028 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 12:20:34.735574    3028 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 12:20:34.736064    3028 kubeadm.go:322] 
	I0524 12:20:34.736094    3028 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 12:20:34.736096    3028 kubeadm.go:322] 
	I0524 12:20:34.736130    3028 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 12:20:34.736131    3028 kubeadm.go:322] 
	I0524 12:20:34.736142    3028 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 12:20:34.736176    3028 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 12:20:34.736197    3028 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 12:20:34.736199    3028 kubeadm.go:322] 
	I0524 12:20:34.736219    3028 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 12:20:34.736221    3028 kubeadm.go:322] 
	I0524 12:20:34.736260    3028 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 12:20:34.736263    3028 kubeadm.go:322] 
	I0524 12:20:34.736294    3028 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 12:20:34.736332    3028 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 12:20:34.736376    3028 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 12:20:34.736378    3028 kubeadm.go:322] 
	I0524 12:20:34.736419    3028 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 12:20:34.736461    3028 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 12:20:34.736463    3028 kubeadm.go:322] 
	I0524 12:20:34.736512    3028 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ihsmrr.wgwpewvmjksejiw0 \
	I0524 12:20:34.736564    3028 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 12:20:34.736574    3028 kubeadm.go:322] 	--control-plane 
	I0524 12:20:34.736577    3028 kubeadm.go:322] 
	I0524 12:20:34.736615    3028 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 12:20:34.736616    3028 kubeadm.go:322] 
	I0524 12:20:34.736660    3028 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ihsmrr.wgwpewvmjksejiw0 \
	I0524 12:20:34.736707    3028 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 12:20:34.736879    3028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 12:20:34.736961    3028 kubeadm.go:322] W0524 19:20:27.369205    1353 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 12:20:34.737045    3028 kubeadm.go:322] W0524 19:20:29.636268    1353 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 12:20:34.737049    3028 cni.go:84] Creating CNI manager for ""
	I0524 12:20:34.737056    3028 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:20:34.746727    3028 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 12:20:34.750926    3028 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 12:20:34.753955    3028 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 12:20:34.758766    3028 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 12:20:34.758829    3028 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=image-594000 minikube.k8s.io/updated_at=2023_05_24T12_20_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:20:34.758833    3028 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:20:34.813235    3028 kubeadm.go:1076] duration metric: took 54.439208ms to wait for elevateKubeSystemPrivileges.
	I0524 12:20:34.830484    3028 ops.go:34] apiserver oom_adj: -16
	I0524 12:20:34.830492    3028 kubeadm.go:406] StartCluster complete in 7.901387s
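The elevateKubeSystemPrivileges step above labels the node and binds kube-system's default service account to cluster-admin, which minikube's addon manager relies on. Its plain kubectl form:

    # Grant kube-system's default service account cluster-admin, as done
    # via the bundled kubectl in the log above.
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default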
	I0524 12:20:34.830504    3028 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:34.830588    3028 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:20:34.830961    3028 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:34.831148    3028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 12:20:34.831200    3028 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 12:20:34.831239    3028 addons.go:66] Setting storage-provisioner=true in profile "image-594000"
	I0524 12:20:34.831241    3028 config.go:182] Loaded profile config "image-594000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:20:34.831247    3028 addons.go:228] Setting addon storage-provisioner=true in "image-594000"
	I0524 12:20:34.831259    3028 addons.go:66] Setting default-storageclass=true in profile "image-594000"
	I0524 12:20:34.831265    3028 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-594000"
	I0524 12:20:34.831267    3028 host.go:66] Checking if "image-594000" exists ...
	I0524 12:20:34.834374    3028 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:20:34.838802    3028 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 12:20:34.838806    3028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 12:20:34.838812    3028 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/id_rsa Username:docker}
	I0524 12:20:34.843507    3028 addons.go:228] Setting addon default-storageclass=true in "image-594000"
	I0524 12:20:34.843521    3028 host.go:66] Checking if "image-594000" exists ...
	I0524 12:20:34.844234    3028 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 12:20:34.844237    3028 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 12:20:34.844242    3028 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/image-594000/id_rsa Username:docker}
	I0524 12:20:34.880716    3028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 12:20:34.883654    3028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 12:20:34.920744    3028 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 12:20:35.312969    3028 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
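The CoreDNS edit at 12:20:34.880716 splices a hosts block into the Corefile ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the host before falling through to upstream DNS. Reduced to its core:

    # Insert a hosts{} stanza before the forward plugin in the Corefile,
    # then replace the ConfigMap so CoreDNS reloads it.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -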
	I0524 12:20:35.349814    3028 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-594000" context rescaled to 1 replicas
	I0524 12:20:35.349837    3028 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:20:35.353864    3028 out.go:177] * Verifying Kubernetes components...
	I0524 12:20:35.357800    3028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 12:20:35.398821    3028 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0524 12:20:35.396363    3028 api_server.go:52] waiting for apiserver process to appear ...
	I0524 12:20:35.405776    3028 addons.go:499] enable addons completed in 574.581417ms: enabled=[default-storageclass storage-provisioner]
	I0524 12:20:35.405816    3028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 12:20:35.409888    3028 api_server.go:72] duration metric: took 60.040584ms to wait for apiserver process to appear ...
	I0524 12:20:35.409891    3028 api_server.go:88] waiting for apiserver healthz status ...
	I0524 12:20:35.409897    3028 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0524 12:20:35.412836    3028 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0524 12:20:35.413448    3028 api_server.go:141] control plane version: v1.27.2
	I0524 12:20:35.413452    3028 api_server.go:131] duration metric: took 3.559041ms to wait for apiserver health ...
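Readiness above is gated on the apiserver's /healthz endpoint returning HTTP 200 over the TLS port. The equivalent manual check (using -k since the cluster CA is not in the host trust store by default):

    # Probe the apiserver health endpoint; a healthy server prints "ok".
    curl -sk https://192.168.105.5:8443/healthz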
	I0524 12:20:35.413457    3028 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 12:20:35.416333    3028 system_pods.go:59] 5 kube-system pods found
	I0524 12:20:35.416338    3028 system_pods.go:61] "etcd-image-594000" [fcbb96c2-f76e-4040-b995-5f43c31c281f] Pending
	I0524 12:20:35.416340    3028 system_pods.go:61] "kube-apiserver-image-594000" [44f0bdea-fafb-4ba9-acdf-aed8b3cd428f] Pending
	I0524 12:20:35.416342    3028 system_pods.go:61] "kube-controller-manager-image-594000" [10c636ee-0100-4953-a01a-a600543a1930] Pending
	I0524 12:20:35.416344    3028 system_pods.go:61] "kube-scheduler-image-594000" [dfe61ada-a2fb-4dc8-a01d-4f9971168e82] Pending
	I0524 12:20:35.416346    3028 system_pods.go:61] "storage-provisioner" [de0982a7-6f76-47c4-9000-b277fa47315c] Pending
	I0524 12:20:35.416347    3028 system_pods.go:74] duration metric: took 2.888833ms to wait for pod list to return data ...
	I0524 12:20:35.416350    3028 kubeadm.go:581] duration metric: took 66.5035ms to wait for : map[apiserver:true system_pods:true] ...
	I0524 12:20:35.416355    3028 node_conditions.go:102] verifying NodePressure condition ...
	I0524 12:20:35.417824    3028 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 12:20:35.417830    3028 node_conditions.go:123] node cpu capacity is 2
	I0524 12:20:35.417835    3028 node_conditions.go:105] duration metric: took 1.478625ms to run NodePressure ...
	I0524 12:20:35.417839    3028 start.go:228] waiting for startup goroutines ...
	I0524 12:20:35.417841    3028 start.go:233] waiting for cluster config update ...
	I0524 12:20:35.417845    3028 start.go:242] writing updated cluster config ...
	I0524 12:20:35.418098    3028 ssh_runner.go:195] Run: rm -f paused
	I0524 12:20:35.447231    3028 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0524 12:20:35.450752    3028 out.go:177] 
	W0524 12:20:35.454870    3028 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 12:20:35.457730    3028 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 12:20:35.471784    3028 out.go:177] * Done! kubectl is now configured to use "image-594000" cluster and "default" namespace by default
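The closing warning flags a kubectl/cluster minor-version skew of 2 (1.25 vs 1.27); Kubernetes' skew policy only guarantees kubectl against an apiserver within one minor version. A sketch of the check behind the message, with the minors taken from the log:

    # Compare client and server minor versions; more than one minor of skew
    # triggers the compatibility warning printed above.
    client_minor=25   # from "kubectl: 1.25.9"
    server_minor=27   # from "cluster: 1.27.2"
    skew=$(( server_minor - client_minor ))
    [ "${skew#-}" -gt 1 ] && echo "kubectl is ${skew} minor versions away from the cluster"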
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 19:20:16 UTC, ends at Wed 2023-05-24 19:20:38 UTC. --
	May 24 19:20:30 image-594000 cri-dockerd[1187]: time="2023-05-24T19:20:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a4db9f0bf97e23c5f96c90d3ed0df7b0b5156e964a383892faba3fe5bc2fe7f9/resolv.conf as [nameserver 192.168.105.1]"
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.713051173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.713169007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.713199382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.713221840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:30 image-594000 cri-dockerd[1187]: time="2023-05-24T19:20:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fff483781f0f0c417dc5bf9c0de690c62a8f97ac80f5e93ea4947bd7d73e3be/resolv.conf as [nameserver 192.168.105.1]"
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.743462173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.743597132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.743627673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.743662173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.806175423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.806288007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.806314298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:20:30 image-594000 dockerd[973]: time="2023-05-24T19:20:30.806335673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:37 image-594000 dockerd[967]: time="2023-05-24T19:20:37.494163677Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	May 24 19:20:37 image-594000 dockerd[967]: time="2023-05-24T19:20:37.646071010Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	May 24 19:20:37 image-594000 dockerd[967]: time="2023-05-24T19:20:37.679344968Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.712116843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.712146302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.712317052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.712348302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:20:37 image-594000 dockerd[967]: time="2023-05-24T19:20:37.858654718Z" level=info msg="ignoring event" container=1acd4219c4734f8ce32cba1947295f49c8e018c3b792aba03fc49385b1066be5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.858961802Z" level=info msg="shim disconnected" id=1acd4219c4734f8ce32cba1947295f49c8e018c3b792aba03fc49385b1066be5 namespace=moby
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.858997885Z" level=warning msg="cleaning up after shim disconnected" id=1acd4219c4734f8ce32cba1947295f49c8e018c3b792aba03fc49385b1066be5 namespace=moby
	May 24 19:20:37 image-594000 dockerd[973]: time="2023-05-24T19:20:37.859002802Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d34293166bf95       305d7ed1dae28       8 seconds ago       Running             kube-scheduler            0                   3fff483781f0f
	aea58f9794e0e       2ee705380c3c5       8 seconds ago       Running             kube-controller-manager   0                   a4db9f0bf97e2
	30952be2630f7       72c9df6be7f1b       8 seconds ago       Running             kube-apiserver            0                   7919a1df2e803
	1aff01570378a       24bc64e911039       8 seconds ago       Running             etcd                      0                   d03eafcc78fb9
	
	* 
	* ==> describe nodes <==
	* Name:               image-594000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-594000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=image-594000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T12_20_34_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:20:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-594000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:20:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:20:36 +0000   Wed, 24 May 2023 19:20:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:20:36 +0000   Wed, 24 May 2023 19:20:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:20:36 +0000   Wed, 24 May 2023 19:20:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:20:36 +0000   Wed, 24 May 2023 19:20:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-594000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cd6237501cb42dba9518dfc8d2be816
	  System UUID:                8cd6237501cb42dba9518dfc8d2be816
	  Boot ID:                    371dc4c2-39d3-4cc0-ae47-6de9bf32e38b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-594000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-594000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-594000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-594000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-594000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-594000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-594000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2s    kubelet  Node image-594000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May24 19:20] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.644354] EINJ: EINJ table not found.
	[  +0.519669] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043282] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.169675] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.082032] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.751538] systemd-fstab-generator[739]: Ignoring "noauto" for root device
	[  +1.585221] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +0.192202] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.067059] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.084297] systemd-fstab-generator[960]: Ignoring "noauto" for root device
	[  +1.152328] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088583] systemd-fstab-generator[1106]: Ignoring "noauto" for root device
	[  +0.078201] systemd-fstab-generator[1117]: Ignoring "noauto" for root device
	[  +0.080347] systemd-fstab-generator[1128]: Ignoring "noauto" for root device
	[  +0.081754] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
	[  +0.090299] systemd-fstab-generator[1180]: Ignoring "noauto" for root device
	[  +3.052965] systemd-fstab-generator[1446]: Ignoring "noauto" for root device
	[  +5.143122] systemd-fstab-generator[2368]: Ignoring "noauto" for root device
	[  +2.965736] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [1aff01570378] <==
	* {"level":"info","ts":"2023-05-24T19:20:30.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-05-24T19:20:30.741Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-05-24T19:20:30.746Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-24T19:20:30.746Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T19:20:30.746Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T19:20:30.746Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-05-24T19:20:30.746Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-05-24T19:20:30.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-594000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:20:30.843Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:20:30.844Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T19:20:30.842Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T19:20:30.844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T19:20:30.845Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	
	* 
	* ==> kernel <==
	*  19:20:38 up 0 min,  0 users,  load average: 0.37, 0.08, 0.03
	Linux image-594000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [30952be2630f] <==
	* I0524 19:20:32.138387       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0524 19:20:32.138404       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 19:20:32.138863       1 controller.go:624] quota admission added evaluator for: namespaces
	I0524 19:20:32.145509       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0524 19:20:32.145518       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0524 19:20:32.145524       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 19:20:32.145532       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0524 19:20:32.145676       1 cache.go:39] Caches are synced for autoregister controller
	I0524 19:20:32.146153       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0524 19:20:32.167266       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0524 19:20:32.170792       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 19:20:32.870191       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 19:20:33.054859       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0524 19:20:33.063593       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0524 19:20:33.063612       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 19:20:33.238603       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 19:20:33.253121       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 19:20:33.267192       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0524 19:20:33.269089       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0524 19:20:33.269433       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 19:20:33.270758       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 19:20:34.078958       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 19:20:34.868093       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 19:20:34.872865       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0524 19:20:34.877917       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [aea58f9794e0] <==
	* I0524 19:20:36.268484       1 controllermanager.go:638] "Started controller" controller="deployment"
	I0524 19:20:36.268533       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0524 19:20:36.268539       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0524 19:20:36.418786       1 controllermanager.go:638] "Started controller" controller="statefulset"
	I0524 19:20:36.418850       1 stateful_set.go:161] "Starting stateful set controller"
	I0524 19:20:36.418859       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	E0524 19:20:36.568931       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0524 19:20:36.568943       1 controllermanager.go:616] "Warning: skipping controller" controller="service"
	I0524 19:20:36.718150       1 controllermanager.go:638] "Started controller" controller="clusterrole-aggregation"
	I0524 19:20:36.718186       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0524 19:20:36.718190       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0524 19:20:36.972895       1 controllermanager.go:638] "Started controller" controller="namespace"
	I0524 19:20:36.972931       1 namespace_controller.go:197] "Starting namespace controller"
	I0524 19:20:36.972938       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0524 19:20:37.118092       1 controllermanager.go:638] "Started controller" controller="replicaset"
	I0524 19:20:37.118131       1 replica_set.go:201] "Starting controller" name="replicaset"
	I0524 19:20:37.118135       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0524 19:20:37.167871       1 controllermanager.go:638] "Started controller" controller="csrapproving"
	I0524 19:20:37.167902       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0524 19:20:37.167917       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0524 19:20:37.328045       1 controllermanager.go:638] "Started controller" controller="persistentvolume-binder"
	I0524 19:20:37.328101       1 pv_controller_base.go:323] "Starting persistent volume controller"
	I0524 19:20:37.328107       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0524 19:20:37.469455       1 controllermanager.go:638] "Started controller" controller="bootstrapsigner"
	I0524 19:20:37.469492       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	
	* 
	* ==> kube-scheduler [d34293166bf9] <==
	* W0524 19:20:32.086177       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 19:20:32.086181       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0524 19:20:32.086236       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 19:20:32.086244       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 19:20:32.086178       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 19:20:32.086410       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 19:20:32.086761       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0524 19:20:32.086770       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0524 19:20:32.899457       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 19:20:32.899501       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 19:20:32.929535       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 19:20:32.929577       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 19:20:32.966905       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 19:20:32.966945       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0524 19:20:33.042608       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 19:20:33.042662       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0524 19:20:33.042757       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0524 19:20:33.042777       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0524 19:20:33.052460       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 19:20:33.052487       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0524 19:20:33.098350       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0524 19:20:33.098438       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0524 19:20:33.183298       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 19:20:33.183387       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0524 19:20:33.683356       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 19:20:16 UTC, ends at Wed 2023-05-24 19:20:38 UTC. --
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.029661    2374 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.029726    2374 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.029741    2374 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.029754    2374 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:20:35 image-594000 kubelet[2374]: E0524 19:20:35.035407    2374 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-594000\" already exists" pod="kube-system/kube-scheduler-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113497    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a3e4db76ca132fd3a9f1139017d1a309-etcd-certs\") pod \"etcd-image-594000\" (UID: \"a3e4db76ca132fd3a9f1139017d1a309\") " pod="kube-system/etcd-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113515    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a3e4db76ca132fd3a9f1139017d1a309-etcd-data\") pod \"etcd-image-594000\" (UID: \"a3e4db76ca132fd3a9f1139017d1a309\") " pod="kube-system/etcd-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113526    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1907423d76953bb39075517c23ea038-usr-share-ca-certificates\") pod \"kube-apiserver-image-594000\" (UID: \"f1907423d76953bb39075517c23ea038\") " pod="kube-system/kube-apiserver-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113535    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abf14a3928240429775f8b423767a26f-ca-certs\") pod \"kube-controller-manager-image-594000\" (UID: \"abf14a3928240429775f8b423767a26f\") " pod="kube-system/kube-controller-manager-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113546    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abf14a3928240429775f8b423767a26f-k8s-certs\") pod \"kube-controller-manager-image-594000\" (UID: \"abf14a3928240429775f8b423767a26f\") " pod="kube-system/kube-controller-manager-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113556    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abf14a3928240429775f8b423767a26f-kubeconfig\") pod \"kube-controller-manager-image-594000\" (UID: \"abf14a3928240429775f8b423767a26f\") " pod="kube-system/kube-controller-manager-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113566    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abf14a3928240429775f8b423767a26f-usr-share-ca-certificates\") pod \"kube-controller-manager-image-594000\" (UID: \"abf14a3928240429775f8b423767a26f\") " pod="kube-system/kube-controller-manager-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113576    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/111bdbf05b290967a59110c1e98e61d5-kubeconfig\") pod \"kube-scheduler-image-594000\" (UID: \"111bdbf05b290967a59110c1e98e61d5\") " pod="kube-system/kube-scheduler-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113586    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1907423d76953bb39075517c23ea038-ca-certs\") pod \"kube-apiserver-image-594000\" (UID: \"f1907423d76953bb39075517c23ea038\") " pod="kube-system/kube-apiserver-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113594    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1907423d76953bb39075517c23ea038-k8s-certs\") pod \"kube-apiserver-image-594000\" (UID: \"f1907423d76953bb39075517c23ea038\") " pod="kube-system/kube-apiserver-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.113604    2374 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abf14a3928240429775f8b423767a26f-flexvolume-dir\") pod \"kube-controller-manager-image-594000\" (UID: \"abf14a3928240429775f8b423767a26f\") " pod="kube-system/kube-controller-manager-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.900145    2374 apiserver.go:52] "Watching apiserver"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.912134    2374 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.931032    2374 reconciler.go:41] "Reconciler: start to sync state"
	May 24 19:20:35 image-594000 kubelet[2374]: E0524 19:20:35.977245    2374 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-594000\" already exists" pod="kube-system/kube-apiserver-image-594000"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.985584    2374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-594000" podStartSLOduration=0.985559426 podCreationTimestamp="2023-05-24 19:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 19:20:35.981336509 +0000 UTC m=+1.126704002" watchObservedRunningTime="2023-05-24 19:20:35.985559426 +0000 UTC m=+1.130926919"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.990016    2374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-594000" podStartSLOduration=0.989983509 podCreationTimestamp="2023-05-24 19:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 19:20:35.985805384 +0000 UTC m=+1.131172877" watchObservedRunningTime="2023-05-24 19:20:35.989983509 +0000 UTC m=+1.135351002"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.990043    2374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-594000" podStartSLOduration=0.990036884 podCreationTimestamp="2023-05-24 19:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 19:20:35.989783509 +0000 UTC m=+1.135151002" watchObservedRunningTime="2023-05-24 19:20:35.990036884 +0000 UTC m=+1.135404377"
	May 24 19:20:35 image-594000 kubelet[2374]: I0524 19:20:35.997547    2374 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-594000" podStartSLOduration=1.997524884 podCreationTimestamp="2023-05-24 19:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 19:20:35.993086093 +0000 UTC m=+1.138453585" watchObservedRunningTime="2023-05-24 19:20:35.997524884 +0000 UTC m=+1.142892377"
	May 24 19:20:36 image-594000 kubelet[2374]: I0524 19:20:36.671580    2374 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

-- /stdout --
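Note on the version-skew warning in the log above: kubectl 1.25.9 against cluster v1.27.2 is two minor versions apart, which minikube flags as potentially incompatible. A minimal way to sidestep the skew, using the bundled kubectl exactly as the log's own hint suggests, would be:

	# Run the cluster-matched kubectl that ships with minikube:
	minikube kubectl -- get pods -A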
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-594000 -n image-594000
helpers_test.go:261: (dbg) Run:  kubectl --context image-594000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-594000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-594000 describe pod storage-provisioner: exit status 1 (37.377917ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-594000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.13s)
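The failed step here is the build-arg variant of minikube image build; the exact invocation is recorded in the Audit table further below. A minimal sketch to reproduce it by hand, assuming a running image-594000 profile and the repository's testdata layout, would be:

	# Build with a build argument and caching disabled, mirroring the test invocation:
	out/minikube-darwin-arm64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str \
	  --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-594000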

TestIngressAddonLegacy/serial/ValidateIngressAddons (55.87s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-607000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-607000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.221007625s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-607000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-607000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e1153bf0-d618-4f58-b399-cc8007d9b57a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e1153bf0-d618-4f58-b399-cc8007d9b57a] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.011974083s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-607000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
E0524 12:22:50.579555    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.025743667s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons disable ingress-dns --alsologtostderr -v=1: (6.252554667s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons disable ingress --alsologtostderr -v=1: (7.075810416s)
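The decisive failure above is the nslookup timeout against the cluster IP, i.e. the ingress-dns responder never answered. A minimal sketch for checking resolution by hand, assuming the ingress-addon-legacy-607000 profile and its addons are still up, would be:

	# Get the profile's cluster IP (192.168.105.6 in this run):
	out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 ip
	# Query the ingress-dns server on that IP directly for the test hostname:
	nslookup hello-john.test 192.168.105.6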
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-607000 -n ingress-addon-legacy-607000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-097000 ssh findmnt            | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT |                     |
	|                | -T /mount3                               |                             |         |         |                     |                     |
	| update-context | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-097000 ssh pgrep              | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-097000 image build -t         | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | localhost/my-image:functional-097000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-097000 image ls               | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	| image          | functional-097000                        | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-097000                     | functional-097000           | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	| start          | -p image-594000 --driver=qemu2           | image-594000                | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-594000                | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-594000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-594000                | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-594000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-594000                | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-594000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-594000                | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-594000                          |                             |         |         |                     |                     |
	| delete         | -p image-594000                          | image-594000                | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:20 PDT |
	| start          | -p ingress-addon-legacy-607000           | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:20 PDT | 24 May 23 12:21 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-607000              | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:21 PDT | 24 May 23 12:22 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-607000              | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:22 PDT | 24 May 23 12:22 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-607000              | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:22 PDT | 24 May 23 12:22 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-607000 ip           | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:22 PDT | 24 May 23 12:22 PDT |
	| addons         | ingress-addon-legacy-607000              | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:22 PDT | 24 May 23 12:22 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-607000              | ingress-addon-legacy-607000 | jenkins | v1.30.1 | 24 May 23 12:22 PDT | 24 May 23 12:23 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 12:20:38
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 12:20:38.720077    3068 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:20:38.720216    3068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:20:38.720219    3068 out.go:309] Setting ErrFile to fd 2...
	I0524 12:20:38.720221    3068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:20:38.720293    3068 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:20:38.721332    3068 out.go:303] Setting JSON to false
	I0524 12:20:38.736650    3068 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3009,"bootTime":1684953029,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:20:38.736735    3068 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:20:38.739373    3068 out.go:177] * [ingress-addon-legacy-607000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:20:38.747379    3068 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:20:38.747410    3068 notify.go:220] Checking for updates...
	I0524 12:20:38.754319    3068 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:20:38.755306    3068 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:20:38.758330    3068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:20:38.761372    3068 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:20:38.764375    3068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:20:38.767575    3068 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:20:38.771332    3068 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:20:38.778375    3068 start.go:295] selected driver: qemu2
	I0524 12:20:38.778383    3068 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:20:38.778391    3068 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:20:38.780431    3068 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:20:38.784338    3068 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:20:38.787534    3068 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:20:38.787565    3068 cni.go:84] Creating CNI manager for ""
	I0524 12:20:38.787572    3068 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 12:20:38.787577    3068 start_flags.go:319] config:
	{Name:ingress-addon-legacy-607000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-607000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:20:38.787654    3068 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:20:38.794339    3068 out.go:177] * Starting control plane node ingress-addon-legacy-607000 in cluster ingress-addon-legacy-607000
	I0524 12:20:38.798341    3068 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0524 12:20:38.850140    3068 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0524 12:20:38.850164    3068 cache.go:57] Caching tarball of preloaded images
	I0524 12:20:38.850322    3068 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0524 12:20:38.855406    3068 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0524 12:20:38.862324    3068 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0524 12:20:38.941152    3068 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0524 12:20:44.237142    3068 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0524 12:20:44.237286    3068 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0524 12:20:44.986394    3068 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0524 12:20:44.986580    3068 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/config.json ...
	I0524 12:20:44.986598    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/config.json: {Name:mk7830467c05fdcfa6bfd1a9df6b133eb311ce3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:20:44.986812    3068 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:20:44.986825    3068 start.go:364] acquiring machines lock for ingress-addon-legacy-607000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:20:44.986851    3068 start.go:368] acquired machines lock for "ingress-addon-legacy-607000" in 22.417µs
	I0524 12:20:44.986864    3068 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-607000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:20:44.986902    3068 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:20:44.995918    3068 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0524 12:20:45.010577    3068 start.go:159] libmachine.API.Create for "ingress-addon-legacy-607000" (driver="qemu2")
	I0524 12:20:45.010601    3068 client.go:168] LocalClient.Create starting
	I0524 12:20:45.010667    3068 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:20:45.010686    3068 main.go:141] libmachine: Decoding PEM data...
	I0524 12:20:45.010698    3068 main.go:141] libmachine: Parsing certificate...
	I0524 12:20:45.010742    3068 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:20:45.010757    3068 main.go:141] libmachine: Decoding PEM data...
	I0524 12:20:45.010766    3068 main.go:141] libmachine: Parsing certificate...
	I0524 12:20:45.011102    3068 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:20:45.148634    3068 main.go:141] libmachine: Creating SSH key...
	I0524 12:20:45.209786    3068 main.go:141] libmachine: Creating Disk image...
	I0524 12:20:45.209791    3068 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:20:45.209936    3068 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/disk.qcow2
	I0524 12:20:45.218406    3068 main.go:141] libmachine: STDOUT: 
	I0524 12:20:45.218439    3068 main.go:141] libmachine: STDERR: 
	I0524 12:20:45.218505    3068 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/disk.qcow2 +20000M
	I0524 12:20:45.225781    3068 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:20:45.225793    3068 main.go:141] libmachine: STDERR: 
	I0524 12:20:45.225818    3068 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/disk.qcow2
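	For reference, the two qemu-img steps just executed reduce to the following (with $MACHINE standing in for the .minikube/machines/ingress-addon-legacy-607000 directory; this restates the logged commands rather than adding new tooling):
	
	    # seed image: convert the raw disk written from the ISO into qcow2
	    qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
	    # grow the qcow2's virtual size by 20000 MiB; space is allocated lazily
	    qemu-img resize "$MACHINE/disk.qcow2" +20000M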
	I0524 12:20:45.225827    3068 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:20:45.225862    3068 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6e:af:ae:e6:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/disk.qcow2
	I0524 12:20:45.260126    3068 main.go:141] libmachine: STDOUT: 
	I0524 12:20:45.260154    3068 main.go:141] libmachine: STDERR: 
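	The single-line QEMU invocation above is easier to audit broken out. Same flags as logged; $FW abbreviates the Homebrew qemu share directory and $MACHINE the machine directory. The VM is started through socket_vmnet_client, which hands QEMU the vmnet network socket as an inherited file descriptor, matching -netdev socket,fd=3:
	
	    # vmnet forwarder wraps the QEMU process and passes the network socket as fd 3
	    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	      qemu-system-aarch64 \
	        -M virt -cpu host -accel hvf -display none -m 4096 -smp 2 \
	        -drive file="$FW/edk2-aarch64-code.fd",readonly=on,format=raw,if=pflash \
	        -boot d -cdrom "$MACHINE/boot2docker.iso" \
	        -qmp unix:"$MACHINE/monitor",server,nowait \
	        -pidfile "$MACHINE/qemu.pid" \
	        -device virtio-net-pci,netdev=net0,mac=16:6e:af:ae:e6:af \
	        -netdev socket,id=net0,fd=3 \
	        -daemonize "$MACHINE/disk.qcow2"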
	I0524 12:20:45.260161    3068 main.go:141] libmachine: Attempt 0
	I0524 12:20:45.260178    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:45.260252    3068 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:45.260273    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:45.260286    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:45.260295    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:45.260301    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:47.262485    3068 main.go:141] libmachine: Attempt 1
	I0524 12:20:47.262567    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:47.262896    3068 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:47.262948    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:47.263001    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:47.263034    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:47.263064    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:49.265181    3068 main.go:141] libmachine: Attempt 2
	I0524 12:20:49.265216    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:49.265357    3068 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:49.265371    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:49.265377    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:49.265382    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:49.265387    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:51.267408    3068 main.go:141] libmachine: Attempt 3
	I0524 12:20:51.267416    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:51.267462    3068 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:51.267470    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:51.267479    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:51.267485    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:51.267491    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:53.269496    3068 main.go:141] libmachine: Attempt 4
	I0524 12:20:53.269506    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:53.269539    3068 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:53.269544    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:53.269563    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:53.269571    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:53.269582    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:55.269954    3068 main.go:141] libmachine: Attempt 5
	I0524 12:20:55.269976    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:55.270091    3068 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0524 12:20:55.270107    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:7a:af:ab:50:7:7c ID:1,7a:af:ab:50:7:7c Lease:0x646fb4f0}
	I0524 12:20:55.270115    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:b6:eb:c5:bc:3:d ID:1,b6:eb:c5:bc:3:d Lease:0x646fb426}
	I0524 12:20:55.270120    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d2:3b:c2:bf:1f:f9 ID:1,d2:3b:c2:bf:1f:f9 Lease:0x646e6299}
	I0524 12:20:55.270135    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a:73:48:f5:f9:b3 ID:1,a:73:48:f5:f9:b3 Lease:0x646faaa3}
	I0524 12:20:57.272210    3068 main.go:141] libmachine: Attempt 6
	I0524 12:20:57.272259    3068 main.go:141] libmachine: Searching for 16:6e:af:ae:e6:af in /var/db/dhcpd_leases ...
	I0524 12:20:57.272384    3068 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0524 12:20:57.272396    3068 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:16:6e:af:ae:e6:af ID:1,16:6e:af:ae:e6:af Lease:0x646fb518}
	I0524 12:20:57.272401    3068 main.go:141] libmachine: Found match: 16:6e:af:ae:e6:af
	I0524 12:20:57.272412    3068 main.go:141] libmachine: IP: 192.168.105.6
	I0524 12:20:57.272418    3068 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
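	The "Attempt N" loop above is minikube polling macOS's DHCP lease database every 2 seconds until the VM's MAC address appears. A rough manual equivalent (the lease-entry layout is inferred from the entries printed above):
	
	    # block until a lease for the VM's MAC shows up
	    while ! grep -q '16:6e:af:ae:e6:af' /var/db/dhcpd_leases; do sleep 2; done
	    # print the surrounding lease entry to read off the assigned IP
	    grep -B 2 -A 1 '16:6e:af:ae:e6:af' /var/db/dhcpd_leases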
	I0524 12:20:58.278925    3068 machine.go:88] provisioning docker machine ...
	I0524 12:20:58.278951    3068 buildroot.go:166] provisioning hostname "ingress-addon-legacy-607000"
	I0524 12:20:58.279004    3068 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:58.279258    3068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1053986d0] 0x10539b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0524 12:20:58.279266    3068 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-607000 && echo "ingress-addon-legacy-607000" | sudo tee /etc/hostname
	I0524 12:20:58.349740    3068 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-607000
	
	I0524 12:20:58.349807    3068 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:58.350064    3068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1053986d0] 0x10539b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0524 12:20:58.350073    3068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-607000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-607000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-607000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 12:20:58.415702    3068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 12:20:58.415712    3068 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16573-1024/.minikube CaCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16573-1024/.minikube}
	I0524 12:20:58.415719    3068 buildroot.go:174] setting up certificates
	I0524 12:20:58.415727    3068 provision.go:83] configureAuth start
	I0524 12:20:58.415733    3068 provision.go:138] copyHostCerts
	I0524 12:20:58.415760    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem
	I0524 12:20:58.415806    3068 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem, removing ...
	I0524 12:20:58.415812    3068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem
	I0524 12:20:58.415940    3068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.pem (1078 bytes)
	I0524 12:20:58.416100    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem
	I0524 12:20:58.416124    3068 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem, removing ...
	I0524 12:20:58.416128    3068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem
	I0524 12:20:58.416174    3068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/cert.pem (1123 bytes)
	I0524 12:20:58.416250    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem
	I0524 12:20:58.416286    3068 exec_runner.go:144] found /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem, removing ...
	I0524 12:20:58.416289    3068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem
	I0524 12:20:58.416329    3068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16573-1024/.minikube/key.pem (1675 bytes)
	I0524 12:20:58.416406    3068 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-607000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-607000]
	I0524 12:20:58.596512    3068 provision.go:172] copyRemoteCerts
	I0524 12:20:58.596558    3068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 12:20:58.596570    3068 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/id_rsa Username:docker}
	I0524 12:20:58.631146    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0524 12:20:58.631211    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 12:20:58.638427    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0524 12:20:58.638469    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0524 12:20:58.645542    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0524 12:20:58.645579    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 12:20:58.652165    3068 provision.go:86] duration metric: configureAuth took 236.431458ms
	I0524 12:20:58.652176    3068 buildroot.go:189] setting minikube options for container-runtime
	I0524 12:20:58.652271    3068 config.go:182] Loaded profile config "ingress-addon-legacy-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0524 12:20:58.652311    3068 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:58.652528    3068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1053986d0] 0x10539b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0524 12:20:58.652533    3068 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 12:20:58.716567    3068 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 12:20:58.716574    3068 buildroot.go:70] root file system type: tmpfs
	I0524 12:20:58.716631    3068 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 12:20:58.716678    3068 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:58.716921    3068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1053986d0] 0x10539b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0524 12:20:58.716958    3068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 12:20:58.785094    3068 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 12:20:58.785149    3068 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:58.785403    3068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1053986d0] 0x10539b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0524 12:20:58.785412    3068 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 12:20:59.094890    3068 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
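	That one-liner installs the unit idempotently: diff -u exits non-zero both when the two files differ and, as here, when /lib/systemd/system/docker.service does not exist yet, so the || branch moves the new unit into place and then reload-enables-restarts docker. Broken out:
	
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	      || {
	        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	        sudo systemctl -f daemon-reload &&
	          sudo systemctl -f enable docker &&
	          sudo systemctl -f restart docker
	      }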
	
	I0524 12:20:59.094903    3068 machine.go:91] provisioned docker machine in 815.9745ms
	I0524 12:20:59.094908    3068 client.go:171] LocalClient.Create took 14.084418291s
	I0524 12:20:59.094922    3068 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-607000" took 14.084465s
	I0524 12:20:59.094932    3068 start.go:300] post-start starting for "ingress-addon-legacy-607000" (driver="qemu2")
	I0524 12:20:59.094935    3068 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 12:20:59.095002    3068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 12:20:59.095012    3068 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/id_rsa Username:docker}
	I0524 12:20:59.130879    3068 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 12:20:59.132265    3068 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 12:20:59.132275    3068 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/addons for local assets ...
	I0524 12:20:59.132343    3068 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16573-1024/.minikube/files for local assets ...
	I0524 12:20:59.132450    3068 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem -> 14542.pem in /etc/ssl/certs
	I0524 12:20:59.132461    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem -> /etc/ssl/certs/14542.pem
	I0524 12:20:59.132568    3068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 12:20:59.135623    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem --> /etc/ssl/certs/14542.pem (1708 bytes)
	I0524 12:20:59.141905    3068 start.go:303] post-start completed in 46.965375ms
	I0524 12:20:59.142281    3068 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/config.json ...
	I0524 12:20:59.142447    3068 start.go:128] duration metric: createHost completed in 14.155657125s
	I0524 12:20:59.142477    3068 main.go:141] libmachine: Using SSH client type: native
	I0524 12:20:59.142700    3068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1053986d0] 0x10539b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0524 12:20:59.142705    3068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 12:20:59.208450    3068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684956059.586346918
	
	I0524 12:20:59.208458    3068 fix.go:207] guest clock: 1684956059.586346918
	I0524 12:20:59.208462    3068 fix.go:220] Guest: 2023-05-24 12:20:59.586346918 -0700 PDT Remote: 2023-05-24 12:20:59.142452 -0700 PDT m=+20.441890085 (delta=443.894918ms)
	I0524 12:20:59.208473    3068 fix.go:191] guest clock delta is within tolerance: 443.894918ms
	I0524 12:20:59.208475    3068 start.go:83] releasing machines lock for "ingress-addon-legacy-607000", held for 14.221736125s
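	The %!s(MISSING) markers here and elsewhere in this log are Go fmt error strings: the command template went through a Printf-style formatter a second time with no arguments, so each verb is flagged as missing. Judging by the output (1684956059.586346918), the command actually run on the guest is the usual clock probe:
	
	    date +%s.%N   # epoch seconds.nanoseconds; compared against the host clock to compute the skew delta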
	I0524 12:20:59.208766    3068 ssh_runner.go:195] Run: cat /version.json
	I0524 12:20:59.208774    3068 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/id_rsa Username:docker}
	I0524 12:20:59.208785    3068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 12:20:59.208806    3068 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/id_rsa Username:docker}
	I0524 12:20:59.288846    3068 ssh_runner.go:195] Run: systemctl --version
	I0524 12:20:59.291419    3068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 12:20:59.293571    3068 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 12:20:59.293612    3068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0524 12:20:59.297203    3068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0524 12:20:59.303299    3068 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 12:20:59.303316    3068 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0524 12:20:59.303411    3068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:20:59.316749    3068 docker.go:633] Got preloaded images: 
	I0524 12:20:59.316759    3068 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0524 12:20:59.316810    3068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 12:20:59.319953    3068 ssh_runner.go:195] Run: which lz4
	I0524 12:20:59.321368    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0524 12:20:59.321467    3068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 12:20:59.322780    3068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 12:20:59.322793    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0524 12:21:01.034749    3068 docker.go:597] Took 1.713347 seconds to copy over tarball
	I0524 12:21:01.034823    3068 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 12:21:02.368288    3068 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.333462792s)
	I0524 12:21:02.368302    3068 ssh_runner.go:146] rm: /preloaded.tar.lz4
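	Condensed, the preload path is: probe whether /preloaded.tar.lz4 already exists on the guest, copy the ~460 MB tarball in when it does not, unpack it into /var (which populates /var/lib/docker), and delete it. In shell terms, with $CACHE standing for .minikube/cache/preloaded-tarball and scp as an illustrative stand-in for minikube's internal ssh_runner copy:
	
	    # guest: existence probe; exits 1 here, so the copy proceeds
	    stat -c "%s %y" /preloaded.tar.lz4
	    # host -> guest: copy the preloaded image tarball
	    scp "$CACHE/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4" docker@192.168.105.6:/preloaded.tar.lz4
	    # guest: lz4-decompress straight into /var, then clean up
	    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	    rm /preloaded.tar.lz4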
	I0524 12:21:02.388066    3068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 12:21:02.391534    3068 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0524 12:21:02.396718    3068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:21:02.463208    3068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 12:21:03.823390    3068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.360171208s)
	I0524 12:21:03.823419    3068 start.go:481] detecting cgroup driver to use...
	I0524 12:21:03.823486    3068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 12:21:03.829613    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0524 12:21:03.832577    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 12:21:03.835793    3068 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 12:21:03.835825    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 12:21:03.839368    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 12:21:03.842555    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 12:21:03.845354    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 12:21:03.848203    3068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 12:21:03.851530    3068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 12:21:03.854799    3068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 12:21:03.857599    3068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 12:21:03.860296    3068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:21:03.922740    3068 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 12:21:03.929246    3068 start.go:481] detecting cgroup driver to use...
	I0524 12:21:03.929311    3068 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 12:21:03.934734    3068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 12:21:03.940154    3068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 12:21:03.947684    3068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 12:21:03.952622    3068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 12:21:03.957325    3068 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 12:21:03.999505    3068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 12:21:04.005127    3068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 12:21:04.010536    3068 ssh_runner.go:195] Run: which cri-dockerd
	I0524 12:21:04.011839    3068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 12:21:04.014937    3068 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 12:21:04.020146    3068 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 12:21:04.082540    3068 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 12:21:04.142761    3068 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 12:21:04.142775    3068 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 12:21:04.148127    3068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:21:04.210413    3068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 12:21:05.354690    3068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.144269792s)
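	Runtime selection, condensed from the commands above: containerd and cri-o are stopped, crictl is pointed at the dockershim socket, and docker is unmasked, reconfigured (the 144-byte daemon.json is written from memory; its contents are not shown in the log), and restarted:
	
	    sudo systemctl stop -f containerd
	    sudo systemctl stop -f crio
	    printf 'runtime-endpoint: unix:///var/run/dockershim.sock\n' | sudo tee /etc/crictl.yaml
	    sudo systemctl unmask docker.service
	    sudo systemctl enable docker.socket
	    sudo systemctl daemon-reload && sudo systemctl restart docker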
	I0524 12:21:05.354757    3068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 12:21:05.366288    3068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 12:21:05.379694    3068 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	I0524 12:21:05.379839    3068 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0524 12:21:05.381198    3068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
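	The hosts update just above rewrites the file in one pass: strip any stale host.minikube.internal line, append the gateway mapping, stage the result in a temp file, and copy it back with sudo (a plain redirection into /etc/hosts would fail, since only the cp runs as root). Reindented:
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.105.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts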
	I0524 12:21:05.384868    3068 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0524 12:21:05.384925    3068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:21:05.392474    3068 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0524 12:21:05.392482    3068 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0524 12:21:05.392525    3068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 12:21:05.396079    3068 ssh_runner.go:195] Run: which lz4
	I0524 12:21:05.397442    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0524 12:21:05.397538    3068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 12:21:05.398811    3068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 12:21:05.398827    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0524 12:21:07.047701    3068 docker.go:597] Took 1.650219 seconds to copy over tarball
	I0524 12:21:07.047805    3068 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 12:21:08.444898    3068 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.397088625s)
	I0524 12:21:08.444911    3068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0524 12:21:08.467342    3068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 12:21:08.471390    3068 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0524 12:21:08.477251    3068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 12:21:08.547755    3068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 12:21:10.351669    3068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.803912583s)
	I0524 12:21:10.351759    3068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 12:21:10.361690    3068 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0524 12:21:10.361698    3068 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0524 12:21:10.361702    3068 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0524 12:21:10.385758    3068 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:21:10.386047    3068 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0524 12:21:10.389860    3068 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0524 12:21:10.391642    3068 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0524 12:21:10.391644    3068 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0524 12:21:10.391710    3068 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0524 12:21:10.391718    3068 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0524 12:21:10.392512    3068 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0524 12:21:10.394822    3068 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:21:10.397474    3068 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0524 12:21:10.397828    3068 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0524 12:21:10.397879    3068 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0524 12:21:10.399760    3068 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0524 12:21:10.400090    3068 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0524 12:21:10.399582    3068 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0524 12:21:10.400780    3068 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W0524 12:21:11.539173    3068 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:11.539282    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:21:11.547297    3068 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0524 12:21:11.547321    3068 docker.go:313] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:21:11.547369    3068 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:21:11.562664    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0524 12:21:11.917591    3068 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:11.917732    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0524 12:21:11.927519    3068 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0524 12:21:11.927540    3068 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0524 12:21:11.927595    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	W0524 12:21:11.933151    3068 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:11.933236    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0524 12:21:11.934745    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0524 12:21:11.940992    3068 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0524 12:21:11.941015    3068 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0524 12:21:11.941058    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0524 12:21:11.949740    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0524 12:21:11.970582    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0524 12:21:11.978546    3068 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0524 12:21:11.978572    3068 docker.go:313] Removing image: registry.k8s.io/pause:3.2
	I0524 12:21:11.978605    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0524 12:21:11.986338    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0524 12:21:12.136535    3068 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:12.136654    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0524 12:21:12.144665    3068 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0524 12:21:12.144685    3068 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0524 12:21:12.144725    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0524 12:21:12.152317    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0524 12:21:12.200510    3068 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:12.200618    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0524 12:21:12.208537    3068 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0524 12:21:12.208557    3068 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0524 12:21:12.208597    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0524 12:21:12.215709    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0524 12:21:12.352930    3068 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:12.353170    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0524 12:21:12.368570    3068 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0524 12:21:12.368610    3068 docker.go:313] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0524 12:21:12.368687    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0524 12:21:12.380720    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0524 12:21:12.571687    3068 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0524 12:21:12.572403    3068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0524 12:21:12.601025    3068 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0524 12:21:12.601069    3068 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.7
	I0524 12:21:12.601169    3068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0524 12:21:12.619449    3068 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0524 12:21:12.619528    3068 cache_images.go:92] LoadImages completed in 2.257829875s
	W0524 12:21:12.619695    3068 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
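The block above shows the cache reconciliation for arm64: for each image whose runtime copy was built for the wrong architecture, minikube inspects the image ID in Docker, sees that it does not match the expected arm64 hash, removes the stale image, and reloads it from the on-disk cache. A minimal Go sketch of that inspect/compare/remove decision, assuming docker is on PATH (the image name and wanted ID below are placeholders, not values from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer mirrors the inspect/rmi sequence in the log: if the
// runtime's image ID differs from the cached hash, remove the stale
// image so the cached tarball can be loaded in its place.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return false // already present with the expected ID
	}
	_ = exec.Command("docker", "rmi", image).Run() // best-effort cleanup
	return true
}

func main() {
	// placeholder values for illustration only
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2", "sha256:..."))
}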
	I0524 12:21:12.619779    3068 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 12:21:12.637607    3068 cni.go:84] Creating CNI manager for ""
	I0524 12:21:12.637622    3068 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 12:21:12.637628    3068 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 12:21:12.637653    3068 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-607000 NodeName:ingress-addon-legacy-607000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0524 12:21:12.637779    3068 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-607000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
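Two notes on the generated config above. First, the evictionHard thresholds of "0%" deliberately disable kubelet disk-pressure eviction, as the inline comment says. Second, the raw log rendered these three values as "0%!"(MISSING)" (repaired above): the already-rendered YAML was passed back through a printf-style formatter, and Go's fmt treats a '%' followed by '"' as an invalid verb with a missing argument. A one-line demonstration:

package main

import "fmt"

func main() {
	// Passing pre-rendered text through Printf re-interprets '%':
	// '%' followed by '"' is an unknown verb with no argument, so
	// fmt emits %!"(MISSING) -- the exact artifact seen in raw logs.
	fmt.Printf("nodefs.available: \"0%\"\n")
	// Output: nodefs.available: "0%!"(MISSING)
}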
	I0524 12:21:12.637843    3068 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-607000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-607000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 12:21:12.637914    3068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0524 12:21:12.642760    3068 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 12:21:12.642821    3068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 12:21:12.646652    3068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0524 12:21:12.652651    3068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0524 12:21:12.658643    3068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0524 12:21:12.664056    3068 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0524 12:21:12.665321    3068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
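The /etc/hosts rewrite above is idempotent: grep -v drops any existing control-plane.minikube.internal entry, echo appends the fresh mapping, and the result goes through a temp file plus sudo cp, because a plain redirect would be opened by the unprivileged shell rather than by sudo. A rough Go equivalent of the same rewrite (must run as root; the IP and hostname are taken from this run):

package main

import (
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line
// maps name to ip, matching the grep -v / echo pipeline in the log.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.105.6", "control-plane.minikube.internal")
}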
	I0524 12:21:12.668619    3068 certs.go:56] Setting up /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000 for IP: 192.168.105.6
	I0524 12:21:12.668632    3068 certs.go:190] acquiring lock for shared ca certs: {Name:mk53f82f750243d1079819acfe50ecbc2a56595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:12.668977    3068 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key
	I0524 12:21:12.669129    3068 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key
	I0524 12:21:12.669161    3068 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.key
	I0524 12:21:12.669167    3068 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt with IP's: []
	I0524 12:21:12.708804    3068 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt ...
	I0524 12:21:12.708809    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: {Name:mkac3ab6068cdbe3cf81bf600d528d73a30258bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:12.709018    3068 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.key ...
	I0524 12:21:12.709024    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.key: {Name:mkd175e946488ab70df246da661d374cda729a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:12.709288    3068 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key.b354f644
	I0524 12:21:12.709304    3068 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 12:21:12.896866    3068 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt.b354f644 ...
	I0524 12:21:12.896871    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt.b354f644: {Name:mka572017e383dc3a71eeda97247bff39598c850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:12.897058    3068 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key.b354f644 ...
	I0524 12:21:12.897061    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key.b354f644: {Name:mk816b468557e5bc1f800ca700a55b3a544d0fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:12.897191    3068 certs.go:337] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt
	I0524 12:21:12.897435    3068 certs.go:341] copying /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key
	I0524 12:21:12.897543    3068 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.key
	I0524 12:21:12.897554    3068 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.crt with IP's: []
	I0524 12:21:13.056773    3068 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.crt ...
	I0524 12:21:13.056778    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.crt: {Name:mka24c0c4dfb9624958383341091138cc73917cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:13.056940    3068 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.key ...
	I0524 12:21:13.056943    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.key: {Name:mk2d883982294989492b87abca9be956feadbc6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
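The certificate steps above generate, in order, a client cert, an apiserver serving cert signed for the node IP plus the service VIP and loopback addresses, and an aggregator (proxy-client) cert. A compact sketch of issuing an IP-SAN serving cert with Go's standard library; it self-signs for brevity, whereas minikube signs with its cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log: node IP, service VIP, loopback.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.105.6"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}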
	I0524 12:21:13.057065    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0524 12:21:13.057080    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0524 12:21:13.057091    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0524 12:21:13.057104    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0524 12:21:13.057116    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0524 12:21:13.057129    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0524 12:21:13.057145    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0524 12:21:13.057155    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0524 12:21:13.057248    3068 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem (1338 bytes)
	W0524 12:21:13.057677    3068 certs.go:433] ignoring /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454_empty.pem, impossibly tiny 0 bytes
	I0524 12:21:13.057685    3068 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 12:21:13.057901    3068 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem (1078 bytes)
	I0524 12:21:13.058100    3068 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem (1123 bytes)
	I0524 12:21:13.058333    3068 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/certs/key.pem (1675 bytes)
	I0524 12:21:13.058452    3068 certs.go:437] found cert: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem (1708 bytes)
	I0524 12:21:13.058640    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:21:13.058656    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem -> /usr/share/ca-certificates/1454.pem
	I0524 12:21:13.058665    3068 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem -> /usr/share/ca-certificates/14542.pem
	I0524 12:21:13.059034    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 12:21:13.067312    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0524 12:21:13.074332    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 12:21:13.081018    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 12:21:13.087617    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 12:21:13.094696    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 12:21:13.101698    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 12:21:13.108176    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 12:21:13.115356    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 12:21:13.122314    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/1454.pem --> /usr/share/ca-certificates/1454.pem (1338 bytes)
	I0524 12:21:13.128829    3068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/ssl/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1708 bytes)
	I0524 12:21:13.135549    3068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 12:21:13.140486    3068 ssh_runner.go:195] Run: openssl version
	I0524 12:21:13.142419    3068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 12:21:13.145352    3068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:21:13.146830    3068 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:36 /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:21:13.146856    3068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 12:21:13.148761    3068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 12:21:13.151915    3068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1454.pem && ln -fs /usr/share/ca-certificates/1454.pem /etc/ssl/certs/1454.pem"
	I0524 12:21:13.155317    3068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1454.pem
	I0524 12:21:13.156920    3068 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 19:16 /usr/share/ca-certificates/1454.pem
	I0524 12:21:13.156942    3068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1454.pem
	I0524 12:21:13.158746    3068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1454.pem /etc/ssl/certs/51391683.0"
	I0524 12:21:13.161731    3068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I0524 12:21:13.164648    3068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I0524 12:21:13.166047    3068 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 19:16 /usr/share/ca-certificates/14542.pem
	I0524 12:21:13.166066    3068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I0524 12:21:13.167710    3068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/3ec20f2e.0"
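The symlink steps above implement OpenSSL's hashed-directory trust lookup: openssl x509 -hash -noout prints the certificate's subject-name hash (b5213941 for minikubeCA.pem in this run), and OpenSSL resolves a CA by opening <hash>.0 inside /etc/ssl/certs. A sketch reproducing the link step, assuming openssl is on PATH:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the <subject-hash>.0 symlink that OpenSSL's
// hashed-directory lookup expects, mimicking "ln -fs" from the log.
func linkCert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // force-replace, as ln -fs does
	return os.Symlink(pemPath, link)
}

func main() {
	_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}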
	I0524 12:21:13.171268    3068 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 12:21:13.172644    3068 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 12:21:13.172676    3068 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-607000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-607000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:21:13.172743    3068 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 12:21:13.179848    3068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 12:21:13.183306    3068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 12:21:13.186311    3068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 12:21:13.189177    3068 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 12:21:13.189200    3068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0524 12:21:13.212763    3068 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0524 12:21:13.212810    3068 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 12:21:13.298889    3068 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 12:21:13.298943    3068 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 12:21:13.298994    3068 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0524 12:21:13.357661    3068 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 12:21:13.358985    3068 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 12:21:13.359016    3068 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 12:21:13.429158    3068 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 12:21:13.437361    3068 out.go:204]   - Generating certificates and keys ...
	I0524 12:21:13.437390    3068 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 12:21:13.437415    3068 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 12:21:13.547705    3068 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 12:21:13.689262    3068 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 12:21:13.815141    3068 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 12:21:13.980330    3068 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 12:21:14.031937    3068 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 12:21:14.032012    3068 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-607000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0524 12:21:14.154749    3068 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 12:21:14.154816    3068 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-607000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0524 12:21:14.213083    3068 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 12:21:14.341070    3068 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 12:21:14.439417    3068 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 12:21:14.439445    3068 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 12:21:14.502362    3068 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 12:21:14.634969    3068 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 12:21:14.705680    3068 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 12:21:14.743883    3068 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 12:21:14.744304    3068 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 12:21:14.752525    3068 out.go:204]   - Booting up control plane ...
	I0524 12:21:14.752578    3068 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 12:21:14.752630    3068 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 12:21:14.752671    3068 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 12:21:14.752707    3068 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 12:21:14.752773    3068 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 12:21:26.252900    3068 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502784 seconds
	I0524 12:21:26.252960    3068 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 12:21:26.258937    3068 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 12:21:26.788568    3068 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 12:21:26.788821    3068 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-607000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0524 12:21:27.294446    3068 kubeadm.go:322] [bootstrap-token] Using token: pumn9y.6p2xv14x36yaotoy
	I0524 12:21:27.301566    3068 out.go:204]   - Configuring RBAC rules ...
	I0524 12:21:27.301648    3068 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 12:21:27.301706    3068 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 12:21:27.304376    3068 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 12:21:27.308253    3068 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 12:21:27.309795    3068 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 12:21:27.310642    3068 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 12:21:27.317789    3068 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 12:21:27.522868    3068 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 12:21:27.728879    3068 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 12:21:27.729334    3068 kubeadm.go:322] 
	I0524 12:21:27.729364    3068 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 12:21:27.729366    3068 kubeadm.go:322] 
	I0524 12:21:27.729401    3068 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 12:21:27.729405    3068 kubeadm.go:322] 
	I0524 12:21:27.729418    3068 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 12:21:27.729453    3068 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 12:21:27.729489    3068 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 12:21:27.729496    3068 kubeadm.go:322] 
	I0524 12:21:27.729541    3068 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 12:21:27.729591    3068 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 12:21:27.729626    3068 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 12:21:27.729630    3068 kubeadm.go:322] 
	I0524 12:21:27.729684    3068 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 12:21:27.729736    3068 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 12:21:27.729739    3068 kubeadm.go:322] 
	I0524 12:21:27.729781    3068 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pumn9y.6p2xv14x36yaotoy \
	I0524 12:21:27.729832    3068 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 \
	I0524 12:21:27.729847    3068 kubeadm.go:322]     --control-plane 
	I0524 12:21:27.729859    3068 kubeadm.go:322] 
	I0524 12:21:27.729901    3068 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 12:21:27.729905    3068 kubeadm.go:322] 
	I0524 12:21:27.729961    3068 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pumn9y.6p2xv14x36yaotoy \
	I0524 12:21:27.730032    3068 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31e7298e5fe39dd9013d3c0dfbef354d505381f39a7a09e3b4af334242438797 
	I0524 12:21:27.730250    3068 kubeadm.go:322] W0524 19:21:13.590739    1556 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0524 12:21:27.730356    3068 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0524 12:21:27.730430    3068 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0524 12:21:27.730502    3068 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 12:21:27.730571    3068 kubeadm.go:322] W0524 19:21:15.126180    1556 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0524 12:21:27.730657    3068 kubeadm.go:322] W0524 19:21:15.126665    1556 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0524 12:21:27.730663    3068 cni.go:84] Creating CNI manager for ""
	I0524 12:21:27.730671    3068 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 12:21:27.730684    3068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 12:21:27.730756    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:27.730759    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=ingress-addon-legacy-607000 minikube.k8s.io/updated_at=2023_05_24T12_21_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:27.798528    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:27.799432    3068 ops.go:34] apiserver oom_adj: -16
	I0524 12:21:28.338232    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:28.838329    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:29.338176    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:29.838263    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:30.338243    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:30.838118    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:31.338213    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:31.838093    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:32.338186    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:32.838167    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:33.337883    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:33.838132    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:34.337936    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:34.838131    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:35.337954    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:35.837099    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:36.337519    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:36.837882    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:37.337819    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:37.836225    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:38.338134    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:38.838076    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:39.338084    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:39.838155    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:40.338113    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:40.838248    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:41.337868    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:41.837903    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:42.337910    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:42.837308    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:43.337994    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:43.837845    3068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 12:21:43.871455    3068 kubeadm.go:1076] duration metric: took 16.140894583s to wait for elevateKubeSystemPrivileges.
	I0524 12:21:43.871467    3068 kubeadm.go:406] StartCluster complete in 30.699044917s
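The burst of "kubectl get sa default" calls above is the post-init privilege wait: minikube polls roughly every 500ms until the default ServiceAccount exists, so that the minikube-rbac clusterrolebinding can take effect, which took about 16s on this run. A minimal polling loop in the same spirit (the kubectl binary path, kubeconfig, and timeout below are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}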
	I0524 12:21:43.871479    3068 settings.go:142] acquiring lock: {Name:mke0e8586c5ffdfb76a30452ad9385e81e1593cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:43.871586    3068 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:21:43.872150    3068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/kubeconfig: {Name:mkd6a5851332ae81ab607caaee690ec1266dd411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:21:43.872323    3068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 12:21:43.872389    3068 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 12:21:43.872447    3068 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-607000"
	I0524 12:21:43.872454    3068 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-607000"
	I0524 12:21:43.872477    3068 host.go:66] Checking if "ingress-addon-legacy-607000" exists ...
	I0524 12:21:43.872480    3068 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-607000"
	I0524 12:21:43.872488    3068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-607000"
	I0524 12:21:43.872588    3068 config.go:182] Loaded profile config "ingress-addon-legacy-607000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0524 12:21:43.872598    3068 kapi.go:59] client config for ingress-addon-legacy-607000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.key", CAFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063ed290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 12:21:43.873002    3068 cert_rotation.go:137] Starting client certificate rotation controller
	I0524 12:21:43.873476    3068 kapi.go:59] client config for ingress-addon-legacy-607000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.key", CAFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063ed290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 12:21:43.877781    3068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:21:43.880974    3068 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 12:21:43.880980    3068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 12:21:43.880988    3068 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/id_rsa Username:docker}
	I0524 12:21:43.884805    3068 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-607000"
	I0524 12:21:43.884821    3068 host.go:66] Checking if "ingress-addon-legacy-607000" exists ...
	I0524 12:21:43.885530    3068 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 12:21:43.885535    3068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 12:21:43.885542    3068 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/ingress-addon-legacy-607000/id_rsa Username:docker}
	I0524 12:21:43.933046    3068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 12:21:43.938356    3068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 12:21:43.961257    3068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 12:21:44.126239    3068 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
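The sed pipeline a few lines up splices a hosts plugin stanza into the CoreDNS Corefile immediately before the forward directive (and a log directive before errors), then applies the result with kubectl replace. Reassembled from that sed program, the injected fragment is:

hosts {
   192.168.105.1 host.minikube.internal
   fallthrough
}

The fallthrough directive makes CoreDNS keep resolving names that are not in the hosts block, so only host.minikube.internal is overridden.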
	I0524 12:21:44.181633    3068 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0524 12:21:44.185685    3068 addons.go:499] enable addons completed in 313.320458ms: enabled=[storage-provisioner default-storageclass]
	I0524 12:21:44.390042    3068 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-607000" context rescaled to 1 replicas
	I0524 12:21:44.390063    3068 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:21:44.393666    3068 out.go:177] * Verifying Kubernetes components...
	I0524 12:21:44.399684    3068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 12:21:44.405119    3068 kapi.go:59] client config for ingress-addon-legacy-607000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.key", CAFile:"/Users/jenkins/minikube-integration/16573-1024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063ed290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 12:21:44.405250    3068 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-607000" to be "Ready" ...
	I0524 12:21:44.406766    3068 node_ready.go:49] node "ingress-addon-legacy-607000" has status "Ready":"True"
	I0524 12:21:44.406771    3068 node_ready.go:38] duration metric: took 1.512375ms waiting for node "ingress-addon-legacy-607000" to be "Ready" ...
	I0524 12:21:44.406774    3068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 12:21:44.409743    3068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-c665b" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:46.415844    3068 pod_ready.go:102] pod "coredns-66bff467f8-c665b" in "kube-system" namespace has status "Ready":"False"
	I0524 12:21:48.424660    3068 pod_ready.go:102] pod "coredns-66bff467f8-c665b" in "kube-system" namespace has status "Ready":"False"
	I0524 12:21:50.425931    3068 pod_ready.go:102] pod "coredns-66bff467f8-c665b" in "kube-system" namespace has status "Ready":"False"
	I0524 12:21:52.922717    3068 pod_ready.go:102] pod "coredns-66bff467f8-c665b" in "kube-system" namespace has status "Ready":"False"
	I0524 12:21:54.921434    3068 pod_ready.go:92] pod "coredns-66bff467f8-c665b" in "kube-system" namespace has status "Ready":"True"
	I0524 12:21:54.921454    3068 pod_ready.go:81] duration metric: took 10.511789375s waiting for pod "coredns-66bff467f8-c665b" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.921463    3068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.925634    3068 pod_ready.go:92] pod "etcd-ingress-addon-legacy-607000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:21:54.925645    3068 pod_ready.go:81] duration metric: took 4.1755ms waiting for pod "etcd-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.925654    3068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.930059    3068 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-607000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:21:54.930069    3068 pod_ready.go:81] duration metric: took 4.407709ms waiting for pod "kube-apiserver-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.930076    3068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.933780    3068 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-607000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:21:54.933790    3068 pod_ready.go:81] duration metric: took 3.706458ms waiting for pod "kube-controller-manager-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.933797    3068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m8bml" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.937174    3068 pod_ready.go:92] pod "kube-proxy-m8bml" in "kube-system" namespace has status "Ready":"True"
	I0524 12:21:54.937186    3068 pod_ready.go:81] duration metric: took 3.3845ms waiting for pod "kube-proxy-m8bml" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:54.937192    3068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:55.114995    3068 request.go:628] Waited for 177.658917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-607000
	I0524 12:21:55.314953    3068 request.go:628] Waited for 193.557666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-607000
	I0524 12:21:55.323648    3068 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-607000" in "kube-system" namespace has status "Ready":"True"
	I0524 12:21:55.323676    3068 pod_ready.go:81] duration metric: took 386.473458ms waiting for pod "kube-scheduler-ingress-addon-legacy-607000" in "kube-system" namespace to be "Ready" ...
	I0524 12:21:55.323721    3068 pod_ready.go:38] duration metric: took 10.9170255s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
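Each pod_ready check above treats a pod as Ready when its PodReady condition reports True. A condensed client-go version of the same check; the kubeconfig path below is a placeholder, and the pod name is the CoreDNS pod from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-66bff467f8-c665b", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Println("Ready:", c.Status) // "True" once the readiness probe passes
		}
	}
}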
	I0524 12:21:55.323762    3068 api_server.go:52] waiting for apiserver process to appear ...
	I0524 12:21:55.323997    3068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 12:21:55.341283    3068 api_server.go:72] duration metric: took 10.951285333s to wait for apiserver process to appear ...
	I0524 12:21:55.341310    3068 api_server.go:88] waiting for apiserver healthz status ...
	I0524 12:21:55.341330    3068 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0524 12:21:55.350323    3068 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0524 12:21:55.351434    3068 api_server.go:141] control plane version: v1.18.20
	I0524 12:21:55.351451    3068 api_server.go:131] duration metric: took 10.133042ms to wait for apiserver health ...
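The healthz probe is a plain HTTPS GET that expects a 200 response with body "ok". A sketch of the request; it skips TLS verification purely for brevity, since the apiserver cert is signed by the cluster-local CA rather than a system root, and real code should add ca.crt to the RootCAs pool instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Illustration only: production code should trust the minikube
		// CA via RootCAs rather than disable verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.105.6:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}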
	I0524 12:21:55.351458    3068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 12:21:55.514918    3068 request.go:628] Waited for 163.3775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0524 12:21:55.528038    3068 system_pods.go:59] 7 kube-system pods found
	I0524 12:21:55.528081    3068 system_pods.go:61] "coredns-66bff467f8-c665b" [da5172f2-efe5-474a-beb7-7583a40f6ad7] Running
	I0524 12:21:55.528093    3068 system_pods.go:61] "etcd-ingress-addon-legacy-607000" [f776309f-6347-4b9e-a649-c88f8c40f86a] Running
	I0524 12:21:55.528103    3068 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-607000" [b8ec7c89-04f2-4ec4-8674-6450c9668331] Running
	I0524 12:21:55.528112    3068 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-607000" [d0e4815b-68ac-44df-9dcd-310138f209d6] Running
	I0524 12:21:55.528135    3068 system_pods.go:61] "kube-proxy-m8bml" [acbe8e27-ffff-443f-8ddc-effad23c1de2] Running
	I0524 12:21:55.528152    3068 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-607000" [1ed42f5b-7b5f-4b81-ae07-8edd2aa713f6] Running
	I0524 12:21:55.528162    3068 system_pods.go:61] "storage-provisioner" [5a33113d-72c0-4707-bd8f-ff9c3bfb47c9] Running
	I0524 12:21:55.528173    3068 system_pods.go:74] duration metric: took 176.709875ms to wait for pod list to return data ...
	I0524 12:21:55.528185    3068 default_sa.go:34] waiting for default service account to be created ...
	I0524 12:21:55.714934    3068 request.go:628] Waited for 186.6085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0524 12:21:55.721903    3068 default_sa.go:45] found service account: "default"
	I0524 12:21:55.721943    3068 default_sa.go:55] duration metric: took 193.742583ms for default service account to be created ...
	I0524 12:21:55.721962    3068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 12:21:55.914897    3068 request.go:628] Waited for 192.832875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0524 12:21:55.927395    3068 system_pods.go:86] 7 kube-system pods found
	I0524 12:21:55.927434    3068 system_pods.go:89] "coredns-66bff467f8-c665b" [da5172f2-efe5-474a-beb7-7583a40f6ad7] Running
	I0524 12:21:55.927446    3068 system_pods.go:89] "etcd-ingress-addon-legacy-607000" [f776309f-6347-4b9e-a649-c88f8c40f86a] Running
	I0524 12:21:55.927457    3068 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-607000" [b8ec7c89-04f2-4ec4-8674-6450c9668331] Running
	I0524 12:21:55.927482    3068 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-607000" [d0e4815b-68ac-44df-9dcd-310138f209d6] Running
	I0524 12:21:55.927495    3068 system_pods.go:89] "kube-proxy-m8bml" [acbe8e27-ffff-443f-8ddc-effad23c1de2] Running
	I0524 12:21:55.927508    3068 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-607000" [1ed42f5b-7b5f-4b81-ae07-8edd2aa713f6] Running
	I0524 12:21:55.927517    3068 system_pods.go:89] "storage-provisioner" [5a33113d-72c0-4707-bd8f-ff9c3bfb47c9] Running
	I0524 12:21:55.927534    3068 system_pods.go:126] duration metric: took 205.56425ms to wait for k8s-apps to be running ...
	I0524 12:21:55.927549    3068 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 12:21:55.927805    3068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 12:21:55.944761    3068 system_svc.go:56] duration metric: took 17.206208ms WaitForService to wait for kubelet.
	I0524 12:21:55.944782    3068 kubeadm.go:581] duration metric: took 11.554796625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 12:21:55.944805    3068 node_conditions.go:102] verifying NodePressure condition ...
	I0524 12:21:56.114913    3068 request.go:628] Waited for 170.018167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0524 12:21:56.123575    3068 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0524 12:21:56.123626    3068 node_conditions.go:123] node cpu capacity is 2
	I0524 12:21:56.123657    3068 node_conditions.go:105] duration metric: took 178.844459ms to run NodePressure ...
	I0524 12:21:56.123685    3068 start.go:228] waiting for startup goroutines ...
	I0524 12:21:56.123708    3068 start.go:233] waiting for cluster config update ...
	I0524 12:21:56.123735    3068 start.go:242] writing updated cluster config ...
	I0524 12:21:56.125047    3068 ssh_runner.go:195] Run: rm -f paused
	I0524 12:21:56.277252    3068 start.go:568] kubectl: 1.25.9, cluster: 1.18.20 (minor skew: 7)
	I0524 12:21:56.282034    3068 out.go:177] 
	W0524 12:21:56.286064    3068 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.18.20.
	I0524 12:21:56.289822    3068 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0524 12:21:56.299006    3068 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-607000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 19:20:56 UTC, ends at Wed 2023-05-24 19:23:05 UTC. --
	May 24 19:22:40 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:40.198580637Z" level=warning msg="cleaning up after shim disconnected" id=b86309c7dab530cf983fc979133cc6e7baeff0dd64f299f19c339ca9d0fd9618 namespace=moby
	May 24 19:22:40 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:40.198585221Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:22:40 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:22:40.198955034Z" level=info msg="ignoring event" container=b86309c7dab530cf983fc979133cc6e7baeff0dd64f299f19c339ca9d0fd9618 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:22:52 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:52.341040811Z" level=info msg="shim disconnected" id=83b52ea07595f1e81080c880e5f589bc1ba9fea3397f48aff4a5c193efb91eb6 namespace=moby
	May 24 19:22:52 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:22:52.341096772Z" level=info msg="ignoring event" container=83b52ea07595f1e81080c880e5f589bc1ba9fea3397f48aff4a5c193efb91eb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:22:52 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:52.341295656Z" level=warning msg="cleaning up after shim disconnected" id=83b52ea07595f1e81080c880e5f589bc1ba9fea3397f48aff4a5c193efb91eb6 namespace=moby
	May 24 19:22:52 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:52.341309907Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.402061540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.402121210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.402499518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.402539436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:22:56.448515359Z" level=info msg="ignoring event" container=28dbc97ccc0e4e64bcc8770c19c227ab8e26c5ae1e727e5cef8b3c65d4a81686 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.448513775Z" level=info msg="shim disconnected" id=28dbc97ccc0e4e64bcc8770c19c227ab8e26c5ae1e727e5cef8b3c65d4a81686 namespace=moby
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.448788954Z" level=warning msg="cleaning up after shim disconnected" id=28dbc97ccc0e4e64bcc8770c19c227ab8e26c5ae1e727e5cef8b3c65d4a81686 namespace=moby
	May 24 19:22:56 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:22:56.448826872Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:23:00.811794713Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=784fc1f49111c0c4e7648b888abfc98bb15d78f5528f2bda30138698fcead55b
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:23:00.820080767Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=784fc1f49111c0c4e7648b888abfc98bb15d78f5528f2bda30138698fcead55b
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:23:00.912738170Z" level=info msg="ignoring event" container=784fc1f49111c0c4e7648b888abfc98bb15d78f5528f2bda30138698fcead55b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:23:00.913136978Z" level=info msg="shim disconnected" id=784fc1f49111c0c4e7648b888abfc98bb15d78f5528f2bda30138698fcead55b namespace=moby
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:23:00.913210231Z" level=warning msg="cleaning up after shim disconnected" id=784fc1f49111c0c4e7648b888abfc98bb15d78f5528f2bda30138698fcead55b namespace=moby
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:23:00.913221065Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1215]: time="2023-05-24T19:23:00.955590331Z" level=info msg="ignoring event" container=65482952e4e1659e0a8af3c4fc2d3d473233a894a1c18352e7c0dc594817ae1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:23:00.955817882Z" level=info msg="shim disconnected" id=65482952e4e1659e0a8af3c4fc2d3d473233a894a1c18352e7c0dc594817ae1e namespace=moby
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:23:00.955881759Z" level=warning msg="cleaning up after shim disconnected" id=65482952e4e1659e0a8af3c4fc2d3d473233a894a1c18352e7c0dc594817ae1e namespace=moby
	May 24 19:23:00 ingress-addon-legacy-607000 dockerd[1221]: time="2023-05-24T19:23:00.955887718Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	28dbc97ccc0e4       13753a81eccfd                                                                                                      9 seconds ago        Exited              hello-world-app           2                   f213aa586d6f6
	9b68581c7b322       nginx@sha256:02ffd439b71d9ea9408e449b568f65c0bbbb94bebd8750f1d80231ab6496008e                                      36 seconds ago       Running             nginx                     0                   6591f8e0f8834
	784fc1f49111c       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   65482952e4e16
	8998ca9991e77       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   b567b55a7a04f
	6f93653c92e72       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   3cd622df43e80
	faf0961a24d18       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   37c701463cea5
	edcf434c4a38e       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   77d4005b56a6b
	07f77e4cb68cf       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   5234bbb268481
	07d0ff3757ca7       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   8739fbcd85be9
	aa13ce9adc819       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   0ca9915876abf
	ce82e3a4a5300       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   3ef040c375f5d
	ced566283fdcc       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   04557ffd82071
	
	* 
	* ==> coredns [faf0961a24d1] <==
	* [INFO] 172.17.0.1:39067 - 28137 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006167s
	[INFO] 172.17.0.1:58542 - 34020 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012793s
	[INFO] 172.17.0.1:2073 - 54524 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029044s
	[INFO] 172.17.0.1:39067 - 39812 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045753s
	[INFO] 172.17.0.1:58542 - 46236 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009584s
	[INFO] 172.17.0.1:58542 - 51996 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015001s
	[INFO] 172.17.0.1:39067 - 57132 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029835s
	[INFO] 172.17.0.1:2073 - 47643 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000008834s
	[INFO] 172.17.0.1:58542 - 48090 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011459s
	[INFO] 172.17.0.1:2073 - 32483 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008084s
	[INFO] 172.17.0.1:39067 - 31597 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024877s
	[INFO] 172.17.0.1:58542 - 3 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008834s
	[INFO] 172.17.0.1:2073 - 18455 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009543s
	[INFO] 172.17.0.1:39067 - 51304 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003296s
	[INFO] 172.17.0.1:58542 - 1478 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015459s
	[INFO] 172.17.0.1:2073 - 27656 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009084s
	[INFO] 172.17.0.1:2073 - 13068 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009s
	[INFO] 172.17.0.1:2073 - 62811 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009501s
	[INFO] 172.17.0.1:55939 - 40603 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000023126s
	[INFO] 172.17.0.1:55939 - 12764 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012501s
	[INFO] 172.17.0.1:55939 - 60982 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030585s
	[INFO] 172.17.0.1:55939 - 56754 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050128s
	[INFO] 172.17.0.1:55939 - 17929 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046419s
	[INFO] 172.17.0.1:55939 - 30102 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012876s
	[INFO] 172.17.0.1:55939 - 20630 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000019334s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-607000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-607000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=ingress-addon-legacy-607000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T12_21_27_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:21:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-607000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:23:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:23:04 +0000   Wed, 24 May 2023 19:21:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:23:04 +0000   Wed, 24 May 2023 19:21:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:23:04 +0000   Wed, 24 May 2023 19:21:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:23:04 +0000   Wed, 24 May 2023 19:21:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-607000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4004084Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4004084Ki
	  pods:               110
	System Info:
	  Machine ID:                 27603f37cf31435eab7da445e3629130
	  System UUID:                27603f37cf31435eab7da445e3629130
	  Boot ID:                    2658fecf-69c4-4ee3-9651-e7755c1bca47
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7wcxd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-66bff467f8-c665b                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     82s
	  kube-system                 etcd-ingress-addon-legacy-607000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-apiserver-ingress-addon-legacy-607000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-607000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-m8bml                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-ingress-addon-legacy-607000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 91s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  91s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s   kubelet     Node ingress-addon-legacy-607000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s   kubelet     Node ingress-addon-legacy-607000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s   kubelet     Node ingress-addon-legacy-607000 status is now: NodeHasSufficientPID
	  Normal  Starting                 81s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [May24 19:20] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.638375] EINJ: EINJ table not found.
	[  +0.524288] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043206] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000797] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.223567] systemd-fstab-generator[475]: Ignoring "noauto" for root device
	[  +0.063423] systemd-fstab-generator[486]: Ignoring "noauto" for root device
	[May24 19:21] systemd-fstab-generator[776]: Ignoring "noauto" for root device
	[  +1.278704] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.181511] systemd-fstab-generator[938]: Ignoring "noauto" for root device
	[  +0.158242] systemd-fstab-generator[972]: Ignoring "noauto" for root device
	[  +0.061663] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[  +0.067870] systemd-fstab-generator[996]: Ignoring "noauto" for root device
	[  +4.327782] systemd-fstab-generator[1208]: Ignoring "noauto" for root device
	[  +1.792130] kauditd_printk_skb: 68 callbacks suppressed
	[  +3.088753] systemd-fstab-generator[1686]: Ignoring "noauto" for root device
	[  +8.471325] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.113507] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.379840] systemd-fstab-generator[2775]: Ignoring "noauto" for root device
	[ +17.524215] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.069558] kauditd_printk_skb: 7 callbacks suppressed
	[May24 19:22] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +35.169280] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [07d0ff3757ca] <==
	* raft2023/05/24 19:21:23 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/05/24 19:21:23 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/05/24 19:21:23 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/05/24 19:21:23 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-05-24 19:21:23.796219 W | auth: simple token is not cryptographically signed
	2023-05-24 19:21:23.797074 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-05-24 19:21:23.798322 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/05/24 19:21:23 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-05-24 19:21:23.798629 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-05-24 19:21:23.799258 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-05-24 19:21:23.799355 I | embed: listening for peers on 192.168.105.6:2380
	2023-05-24 19:21:23.799433 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/05/24 19:21:24 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/05/24 19:21:24 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/05/24 19:21:24 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/05/24 19:21:24 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/05/24 19:21:24 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-05-24 19:21:24.005536 I | etcdserver: published {Name:ingress-addon-legacy-607000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-05-24 19:21:24.005579 I | etcdserver: setting up the initial cluster version to 3.4
	2023-05-24 19:21:24.005645 I | embed: ready to serve client requests
	2023-05-24 19:21:24.005714 I | embed: ready to serve client requests
	2023-05-24 19:21:24.006341 I | embed: serving client requests on 127.0.0.1:2379
	2023-05-24 19:21:24.006458 I | embed: serving client requests on 192.168.105.6:2379
	2023-05-24 19:21:24.008237 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-05-24 19:21:24.008261 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  19:23:05 up 2 min,  0 users,  load average: 0.48, 0.22, 0.08
	Linux ingress-addon-legacy-607000 5.10.57 #1 SMP PREEMPT Sat May 20 00:35:14 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ce82e3a4a530] <==
	* I0524 19:21:25.438447       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0524 19:21:25.469353       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0524 19:21:25.537232       1 cache.go:39] Caches are synced for autoregister controller
	I0524 19:21:25.537388       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 19:21:25.537437       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0524 19:21:25.537814       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0524 19:21:25.537896       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0524 19:21:25.545567       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 19:21:26.436442       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0524 19:21:26.436625       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 19:21:26.448907       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0524 19:21:26.453985       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0524 19:21:26.454016       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0524 19:21:26.592626       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 19:21:26.605456       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0524 19:21:26.713363       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0524 19:21:26.713852       1 controller.go:609] quota admission added evaluator for: endpoints
	I0524 19:21:26.715457       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 19:21:27.746783       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0524 19:21:27.892862       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0524 19:21:28.082434       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0524 19:21:43.849642       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0524 19:21:44.397921       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0524 19:21:56.531842       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0524 19:22:26.717906       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ced566283fdc] <==
	* I0524 19:21:43.947257       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-607000", UID:"d66cd7b1-3da1-45d1-8f54-531547e5479b", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-607000 event: Registered Node ingress-addon-legacy-607000 in Controller
	I0524 19:21:44.028779       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0524 19:21:44.099127       1 shared_informer.go:230] Caches are synced for disruption 
	I0524 19:21:44.099135       1 disruption.go:339] Sending events to api server.
	I0524 19:21:44.223403       1 shared_informer.go:230] Caches are synced for HPA 
	I0524 19:21:44.275398       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0524 19:21:44.356118       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0524 19:21:44.356132       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0524 19:21:44.396167       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0524 19:21:44.402557       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"bbc877da-3965-4f67-a419-123a4fc696d5", APIVersion:"apps/v1", ResourceVersion:"206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m8bml
	E0524 19:21:44.412765       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"bbc877da-3965-4f67-a419-123a4fc696d5", ResourceVersion:"206", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63820552888, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000b75600), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000b75660)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000b756c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40019559c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000b75740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b757a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000b75860)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000a4f180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001199158), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400010bce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400024ffb0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40011991c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0524 19:21:44.414625       1 shared_informer.go:230] Caches are synced for stateful set 
	I0524 19:21:44.433029       1 shared_informer.go:230] Caches are synced for resource quota 
	I0524 19:21:44.433065       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0524 19:21:44.874996       1 request.go:621] Throttling request took 1.049419962s, request: GET:https://control-plane.minikube.internal:8443/apis/authorization.k8s.io/v1?timeout=32s
	I0524 19:21:45.475968       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0524 19:21:45.476009       1 shared_informer.go:230] Caches are synced for resource quota 
	I0524 19:21:56.525493       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a18aab7f-be7c-4bdf-ab51-d887599940e4", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0524 19:21:56.537259       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"99820040-2ad5-40ba-8d8a-d9e9510cd08e", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hvc67
	I0524 19:21:56.537363       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9d9035ec-7bf5-42d4-ab8c-a8c1c533f65e", APIVersion:"batch/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-swrvj
	I0524 19:21:56.567246       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"fb279aae-4f0e-47b7-b91a-db71804e7387", APIVersion:"batch/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hhns9
	I0524 19:21:59.590988       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"fb279aae-4f0e-47b7-b91a-db71804e7387", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0524 19:21:59.609252       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9d9035ec-7bf5-42d4-ab8c-a8c1c533f65e", APIVersion:"batch/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0524 19:22:37.017581       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"19f29967-fc58-426d-ac39-5f1683868bb3", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0524 19:22:37.031387       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"11bfada9-9042-49cf-aa4e-5a63daed9d5c", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7wcxd
	
	* 
	* ==> kube-proxy [07f77e4cb68c] <==
	* W0524 19:21:44.906837       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0524 19:21:44.910733       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0524 19:21:44.910747       1 server_others.go:186] Using iptables Proxier.
	I0524 19:21:44.910904       1 server.go:583] Version: v1.18.20
	I0524 19:21:44.914547       1 config.go:315] Starting service config controller
	I0524 19:21:44.914571       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0524 19:21:44.914703       1 config.go:133] Starting endpoints config controller
	I0524 19:21:44.914719       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0524 19:21:45.020520       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0524 19:21:45.020520       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [aa13ce9adc81] <==
	* W0524 19:21:25.472817       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 19:21:25.472826       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0524 19:21:25.472832       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0524 19:21:25.483865       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0524 19:21:25.483915       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0524 19:21:25.484786       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0524 19:21:25.484895       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0524 19:21:25.484922       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0524 19:21:25.484953       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0524 19:21:25.487836       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 19:21:25.487955       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 19:21:25.488008       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 19:21:25.488061       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 19:21:25.488109       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 19:21:25.488158       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 19:21:25.488346       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 19:21:25.488389       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 19:21:25.488369       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0524 19:21:25.488462       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 19:21:25.488491       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0524 19:21:25.488530       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0524 19:21:26.409066       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 19:21:26.429550       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0524 19:21:26.516302       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0524 19:21:26.887535       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 19:20:56 UTC, ends at Wed 2023-05-24 19:23:06 UTC. --
	May 24 19:22:42 ingress-addon-legacy-607000 kubelet[2781]: E0524 19:22:42.172115    2781 pod_workers.go:191] Error syncing pod 8aea39d8-b35b-4658-9933-4be6f2ca82b4 ("hello-world-app-5f5d8b66bb-7wcxd_default(8aea39d8-b35b-4658-9933-4be6f2ca82b4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7wcxd_default(8aea39d8-b35b-4658-9933-4be6f2ca82b4)"
	May 24 19:22:43 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:43.324342    2781 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 84c29c712d3128ddabf1569107d021d982e7f387ca6a0186fe98e9fa21bcd025
	May 24 19:22:43 ingress-addon-legacy-607000 kubelet[2781]: E0524 19:22:43.325102    2781 pod_workers.go:191] Error syncing pod c5524b44-fd85-4b5e-95b2-02f92225a734 ("kube-ingress-dns-minikube_kube-system(c5524b44-fd85-4b5e-95b2-02f92225a734)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(c5524b44-fd85-4b5e-95b2-02f92225a734)"
	May 24 19:22:52 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:52.408436    2781 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-fhcf5" (UniqueName: "kubernetes.io/secret/c5524b44-fd85-4b5e-95b2-02f92225a734-minikube-ingress-dns-token-fhcf5") pod "c5524b44-fd85-4b5e-95b2-02f92225a734" (UID: "c5524b44-fd85-4b5e-95b2-02f92225a734")
	May 24 19:22:52 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:52.411157    2781 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5524b44-fd85-4b5e-95b2-02f92225a734-minikube-ingress-dns-token-fhcf5" (OuterVolumeSpecName: "minikube-ingress-dns-token-fhcf5") pod "c5524b44-fd85-4b5e-95b2-02f92225a734" (UID: "c5524b44-fd85-4b5e-95b2-02f92225a734"). InnerVolumeSpecName "minikube-ingress-dns-token-fhcf5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 24 19:22:52 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:52.508654    2781 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-fhcf5" (UniqueName: "kubernetes.io/secret/c5524b44-fd85-4b5e-95b2-02f92225a734-minikube-ingress-dns-token-fhcf5") on node "ingress-addon-legacy-607000" DevicePath ""
	May 24 19:22:53 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:53.333098    2781 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 84c29c712d3128ddabf1569107d021d982e7f387ca6a0186fe98e9fa21bcd025
	May 24 19:22:56 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:56.325484    2781 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b86309c7dab530cf983fc979133cc6e7baeff0dd64f299f19c339ca9d0fd9618
	May 24 19:22:56 ingress-addon-legacy-607000 kubelet[2781]: W0524 19:22:56.396994    2781 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-7wcxd through plugin: invalid network status for
	May 24 19:22:56 ingress-addon-legacy-607000 kubelet[2781]: W0524 19:22:56.460676    2781 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod8aea39d8-b35b-4658-9933-4be6f2ca82b4/28dbc97ccc0e4e64bcc8770c19c227ab8e26c5ae1e727e5cef8b3c65d4a81686": none of the resources are being tracked.
	May 24 19:22:57 ingress-addon-legacy-607000 kubelet[2781]: W0524 19:22:57.463078    2781 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-7wcxd through plugin: invalid network status for
	May 24 19:22:57 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:57.468636    2781 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b86309c7dab530cf983fc979133cc6e7baeff0dd64f299f19c339ca9d0fd9618
	May 24 19:22:57 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:22:57.469128    2781 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 28dbc97ccc0e4e64bcc8770c19c227ab8e26c5ae1e727e5cef8b3c65d4a81686
	May 24 19:22:57 ingress-addon-legacy-607000 kubelet[2781]: E0524 19:22:57.469652    2781 pod_workers.go:191] Error syncing pod 8aea39d8-b35b-4658-9933-4be6f2ca82b4 ("hello-world-app-5f5d8b66bb-7wcxd_default(8aea39d8-b35b-4658-9933-4be6f2ca82b4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7wcxd_default(8aea39d8-b35b-4658-9933-4be6f2ca82b4)"
	May 24 19:22:58 ingress-addon-legacy-607000 kubelet[2781]: W0524 19:22:58.472052    2781 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-7wcxd through plugin: invalid network status for
	May 24 19:22:58 ingress-addon-legacy-607000 kubelet[2781]: E0524 19:22:58.802857    2781 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hvc67.17622ab7a37caa72", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hvc67", UID:"de6222ad-6ead-4f65-b3ab-f805906beb01", APIVersion:"v1", ResourceVersion:"444", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-607000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc113b6e4afb27672, ext:90954226827, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc113b6e4afb27672, ext:90954226827, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hvc67.17622ab7a37caa72" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 24 19:22:58 ingress-addon-legacy-607000 kubelet[2781]: E0524 19:22:58.818989    2781 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hvc67.17622ab7a37caa72", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hvc67", UID:"de6222ad-6ead-4f65-b3ab-f805906beb01", APIVersion:"v1", ResourceVersion:"444", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-607000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc113b6e4afb27672, ext:90954226827, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc113b6e4b0548b19, ext:90964848988, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hvc67.17622ab7a37caa72" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:23:01.015049    2781 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-pxgbp" (UniqueName: "kubernetes.io/secret/de6222ad-6ead-4f65-b3ab-f805906beb01-ingress-nginx-token-pxgbp") pod "de6222ad-6ead-4f65-b3ab-f805906beb01" (UID: "de6222ad-6ead-4f65-b3ab-f805906beb01")
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:23:01.015072    2781 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/de6222ad-6ead-4f65-b3ab-f805906beb01-webhook-cert") pod "de6222ad-6ead-4f65-b3ab-f805906beb01" (UID: "de6222ad-6ead-4f65-b3ab-f805906beb01")
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:23:01.019708    2781 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6222ad-6ead-4f65-b3ab-f805906beb01-ingress-nginx-token-pxgbp" (OuterVolumeSpecName: "ingress-nginx-token-pxgbp") pod "de6222ad-6ead-4f65-b3ab-f805906beb01" (UID: "de6222ad-6ead-4f65-b3ab-f805906beb01"). InnerVolumeSpecName "ingress-nginx-token-pxgbp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:23:01.019891    2781 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6222ad-6ead-4f65-b3ab-f805906beb01-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "de6222ad-6ead-4f65-b3ab-f805906beb01" (UID: "de6222ad-6ead-4f65-b3ab-f805906beb01"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:23:01.115266    2781 reconciler.go:319] Volume detached for volume "ingress-nginx-token-pxgbp" (UniqueName: "kubernetes.io/secret/de6222ad-6ead-4f65-b3ab-f805906beb01-ingress-nginx-token-pxgbp") on node "ingress-addon-legacy-607000" DevicePath ""
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: I0524 19:23:01.115287    2781 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/de6222ad-6ead-4f65-b3ab-f805906beb01-webhook-cert") on node "ingress-addon-legacy-607000" DevicePath ""
	May 24 19:23:01 ingress-addon-legacy-607000 kubelet[2781]: W0524 19:23:01.543281    2781 pod_container_deletor.go:77] Container "65482952e4e1659e0a8af3c4fc2d3d473233a894a1c18352e7c0dc594817ae1e" not found in pod's containers
	May 24 19:23:02 ingress-addon-legacy-607000 kubelet[2781]: W0524 19:23:02.348766    2781 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/de6222ad-6ead-4f65-b3ab-f805906beb01/volumes" does not exist
	
	* 
	* ==> storage-provisioner [edcf434c4a38] <==
	* I0524 19:21:46.689211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0524 19:21:46.694155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0524 19:21:46.694202       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0524 19:21:46.698421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0524 19:21:46.698812       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-607000_d871c7c1-2893-46e1-bbdc-72d792fe16d9!
	I0524 19:21:46.699545       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c87dbad-f8a7-4298-8d82-53f480070bb3", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-607000_d871c7c1-2893-46e1-bbdc-72d792fe16d9 became leader
	I0524 19:21:46.799367       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-607000_d871c7c1-2893-46e1-bbdc-72d792fe16d9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-607000 -n ingress-addon-legacy-607000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-607000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.87s)
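The kubelet log above shows two separable symptoms: the hello-world-app container is cycling in CrashLoopBackOff, and the kubelet's "Killing" events are rejected only because the ingress-nginx namespace is already terminating. For an interactive re-run, a minimal triage sketch (hypothetical commands; the app=hello-world-app label is assumed from the pod name hello-world-app-5f5d8b66bb-7wcxd):

	# Describe the crashing pod and pull the previous container's logs.
	kubectl --context ingress-addon-legacy-607000 describe pod -l app=hello-world-app
	kubectl --context ingress-addon-legacy-607000 logs -l app=hello-world-app --previous
	# Recent events in the terminating ingress-nginx namespace, oldest first.
	kubectl --context ingress-addon-legacy-607000 get events -n ingress-nginx --sort-by=.lastTimestamp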

TestMountStart/serial/StartWithMountFirst (10.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-721000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-721000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.247226792s)

-- stdout --
	* [mount-start-1-721000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-721000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-721000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-721000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-721000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-721000 -n mount-start-1-721000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-721000 -n mount-start-1-721000: exit status 7 (66.58025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-721000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.31s)
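The failure is not mount-specific: both VM creation attempts abort before boot because the qemu2 driver cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A host-side sanity check, as a sketch (assuming the /opt/socket_vmnet install visible in the libmachine command lines later in this report):

	# The unix socket should exist if the daemon is up.
	ls -l /var/run/socket_vmnet
	# Is any socket_vmnet process running at all?
	pgrep -fl socket_vmnet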

TestMultiNode/serial/FreshStart2Nodes (9.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-636000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-636000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.738599667s)

-- stdout --
	* [multinode-636000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-636000 in cluster multinode-636000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-636000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:25:55.227171    3405 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:25:55.227338    3405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:25:55.227341    3405 out.go:309] Setting ErrFile to fd 2...
	I0524 12:25:55.227343    3405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:25:55.227425    3405 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:25:55.228671    3405 out.go:303] Setting JSON to false
	I0524 12:25:55.243987    3405 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3326,"bootTime":1684953029,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:25:55.244041    3405 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:25:55.249976    3405 out.go:177] * [multinode-636000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:25:55.257865    3405 notify.go:220] Checking for updates...
	I0524 12:25:55.260836    3405 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:25:55.264905    3405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:25:55.267944    3405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:25:55.270891    3405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:25:55.274808    3405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:25:55.277894    3405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:25:55.280856    3405 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:25:55.284761    3405 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:25:55.290762    3405 start.go:295] selected driver: qemu2
	I0524 12:25:55.290768    3405 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:25:55.290775    3405 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:25:55.292739    3405 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:25:55.295803    3405 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:25:55.298979    3405 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:25:55.298999    3405 cni.go:84] Creating CNI manager for ""
	I0524 12:25:55.299004    3405 cni.go:136] 0 nodes found, recommending kindnet
	I0524 12:25:55.299015    3405 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0524 12:25:55.299021    3405 start_flags.go:319] config:
	{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:25:55.299105    3405 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:25:55.306835    3405 out.go:177] * Starting control plane node multinode-636000 in cluster multinode-636000
	I0524 12:25:55.310898    3405 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:25:55.310920    3405 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:25:55.310931    3405 cache.go:57] Caching tarball of preloaded images
	I0524 12:25:55.311000    3405 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:25:55.311005    3405 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:25:55.311207    3405 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/multinode-636000/config.json ...
	I0524 12:25:55.311221    3405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/multinode-636000/config.json: {Name:mk495f9057bdf0b9ad1b3814ad89a31f1c7604a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:25:55.311432    3405 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:25:55.311447    3405 start.go:364] acquiring machines lock for multinode-636000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:25:55.311478    3405 start.go:368] acquired machines lock for "multinode-636000" in 25.25µs
	I0524 12:25:55.311494    3405 start.go:93] Provisioning new machine with config: &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:25:55.311524    3405 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:25:55.318898    3405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:25:55.335639    3405 start.go:159] libmachine.API.Create for "multinode-636000" (driver="qemu2")
	I0524 12:25:55.335658    3405 client.go:168] LocalClient.Create starting
	I0524 12:25:55.335716    3405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:25:55.335744    3405 main.go:141] libmachine: Decoding PEM data...
	I0524 12:25:55.335759    3405 main.go:141] libmachine: Parsing certificate...
	I0524 12:25:55.335803    3405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:25:55.335821    3405 main.go:141] libmachine: Decoding PEM data...
	I0524 12:25:55.335833    3405 main.go:141] libmachine: Parsing certificate...
	I0524 12:25:55.336232    3405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:25:55.455727    3405 main.go:141] libmachine: Creating SSH key...
	I0524 12:25:55.550629    3405 main.go:141] libmachine: Creating Disk image...
	I0524 12:25:55.550635    3405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:25:55.550782    3405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:25:55.559319    3405 main.go:141] libmachine: STDOUT: 
	I0524 12:25:55.559334    3405 main.go:141] libmachine: STDERR: 
	I0524 12:25:55.559377    3405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2 +20000M
	I0524 12:25:55.566459    3405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:25:55.566472    3405 main.go:141] libmachine: STDERR: 
	I0524 12:25:55.566486    3405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:25:55.566494    3405 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:25:55.566530    3405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:f6:4d:09:0e:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:25:55.568018    3405 main.go:141] libmachine: STDOUT: 
	I0524 12:25:55.568032    3405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:25:55.568048    3405 client.go:171] LocalClient.Create took 232.387459ms
	I0524 12:25:57.570206    3405 start.go:128] duration metric: createHost completed in 2.25868075s
	I0524 12:25:57.570272    3405 start.go:83] releasing machines lock for "multinode-636000", held for 2.258807375s
	W0524 12:25:57.570324    3405 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:25:57.577923    3405 out.go:177] * Deleting "multinode-636000" in qemu2 ...
	W0524 12:25:57.598438    3405 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:25:57.598468    3405 start.go:702] Will try again in 5 seconds ...
	I0524 12:26:02.600656    3405 start.go:364] acquiring machines lock for multinode-636000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:26:02.601229    3405 start.go:368] acquired machines lock for "multinode-636000" in 464.625µs
	I0524 12:26:02.601351    3405 start.go:93] Provisioning new machine with config: &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:26:02.601702    3405 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:26:02.609409    3405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:26:02.653577    3405 start.go:159] libmachine.API.Create for "multinode-636000" (driver="qemu2")
	I0524 12:26:02.653626    3405 client.go:168] LocalClient.Create starting
	I0524 12:26:02.653739    3405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:26:02.653781    3405 main.go:141] libmachine: Decoding PEM data...
	I0524 12:26:02.653811    3405 main.go:141] libmachine: Parsing certificate...
	I0524 12:26:02.653882    3405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:26:02.653909    3405 main.go:141] libmachine: Decoding PEM data...
	I0524 12:26:02.653923    3405 main.go:141] libmachine: Parsing certificate...
	I0524 12:26:02.654445    3405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:26:02.837869    3405 main.go:141] libmachine: Creating SSH key...
	I0524 12:26:02.882250    3405 main.go:141] libmachine: Creating Disk image...
	I0524 12:26:02.882255    3405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:26:02.882408    3405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:26:02.890883    3405 main.go:141] libmachine: STDOUT: 
	I0524 12:26:02.890899    3405 main.go:141] libmachine: STDERR: 
	I0524 12:26:02.890944    3405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2 +20000M
	I0524 12:26:02.898019    3405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:26:02.898032    3405 main.go:141] libmachine: STDERR: 
	I0524 12:26:02.898045    3405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:26:02.898050    3405 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:26:02.898092    3405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8a:95:56:7b:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:26:02.899643    3405 main.go:141] libmachine: STDOUT: 
	I0524 12:26:02.899659    3405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:26:02.899673    3405 client.go:171] LocalClient.Create took 246.044ms
	I0524 12:26:04.901777    3405 start.go:128] duration metric: createHost completed in 2.300069875s
	I0524 12:26:04.901824    3405 start.go:83] releasing machines lock for "multinode-636000", held for 2.300595583s
	W0524 12:26:04.902237    3405 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:26:04.910539    3405 out.go:177] 
	W0524 12:26:04.914669    3405 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:26:04.914689    3405 out.go:239] * 
	* 
	W0524 12:26:04.916985    3405 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:26:04.925622    3405 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-636000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (65.319375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.81s)
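This is the same socket_vmnet connection refusal as in the tests above: the control plane node is never provisioned, so every remaining TestMultiNode subtest below fails in cascade against a cluster that does not exist. The cleanup minikube itself proposes in the stderr would be:

	# Remove the half-created profile before retrying (suggestion taken verbatim from the stderr above).
	out/minikube-darwin-arm64 delete -p multinode-636000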

TestMultiNode/serial/DeployApp2Nodes (102.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (121.659875ms)

** stderr ** 
	error: cluster "multinode-636000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- rollout status deployment/busybox: exit status 1 (53.89625ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (53.811375ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.555542ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.739125ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.901959ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.392875ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.647875ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.200667ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.047ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0524 12:26:45.416492    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.778667ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0524 12:27:10.354865    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:10.361232    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:10.373336    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:10.395445    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:10.436083    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:10.518195    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:10.680366    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:11.002494    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:11.644842    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:12.927170    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:15.489645    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.79325ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0524 12:27:20.611977    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:27:30.854389    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.166583ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.099084ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.704208ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default: exit status 1 (52.632375ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.91725ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.207791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (102.31s)
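Every kubectl invocation above fails with 'no server found for cluster "multinode-636000"' because the cluster from FreshStart2Nodes was never provisioned, and the interleaved cert_rotation errors only reference client certificates of profiles deleted earlier in the run. To inspect what the test kubeconfig actually holds, a sketch reusing the KUBECONFIG path from the start output earlier in this report:

	# List contexts and show the (likely server-less) entry for the current one.
	KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig kubectl config get-contexts
	KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig kubectl config view --minify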

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.124416ms)

** stderr ** 
	error: no server found for cluster "multinode-636000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.825167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-636000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-636000 -v 3 --alsologtostderr: exit status 89 (40.551709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-636000"

-- /stdout --
** stderr ** 
	I0524 12:27:47.422502    3490 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:47.422711    3490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.422714    3490 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:47.422716    3490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.422787    3490 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:47.423014    3490 mustload.go:65] Loading cluster: multinode-636000
	I0524 12:27:47.423201    3490 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:47.427773    3490 out.go:177] * The control plane node must be running for this command
	I0524 12:27:47.431901    3490 out.go:177]   To start a cluster, run: "minikube start -p multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-636000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.808458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
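
Exit status 89 here is minikube's "control plane node must be running" refusal rather than a crash. A minimal sketch, not the suite's logic, of gating `node add` on the same `status --format={{.Host}}` probe the post-mortem runs; the binary path and profile name are copied from the log.

// sketch_gate_nodeadd.go -- hypothetical guard, not the suite's logic
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the post-mortem runs; prints "Stopped" for this profile.
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-636000").Output()
	if strings.TrimSpace(string(out)) != "Running" {
		fmt.Println(`control plane not running; run "minikube start -p multinode-636000" first`)
		return
	}
	// Only now is `node add` expected to succeed.
	if err := exec.Command("out/minikube-darwin-arm64", "node", "add",
		"-p", "multinode-636000", "-v", "3").Run(); err != nil {
		fmt.Println("node add failed:", err)
	}
}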

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-636000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-636000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-636000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.2\",\"ClusterName\":\"multinode-636000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (28.320292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
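
The assertion above decodes the `profile list --output json` payload and counts `Config.Nodes`; the profile shown carries a single node where the test expects three. A minimal sketch of that check, with illustrative stand-in structs (field names mirror the JSON in the log, not minikube's own types) and the payload trimmed to the relevant fields:

// sketch_profilelist.go -- stand-in structs, field names from the log's JSON
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Trimmed from the payload above: a single control-plane node.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-636000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
	}
}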

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status --output json --alsologtostderr: exit status 7 (28.256875ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-636000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:47.597430    3500 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:47.597573    3500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.597576    3500 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:47.597579    3500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.597658    3500 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:47.597783    3500 out.go:303] Setting JSON to true
	I0524 12:27:47.597795    3500 mustload.go:65] Loading cluster: multinode-636000
	I0524 12:27:47.597860    3500 notify.go:220] Checking for updates...
	I0524 12:27:47.597985    3500 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:47.597991    3500 status.go:255] checking status of multinode-636000 ...
	I0524 12:27:47.598160    3500 status.go:330] multinode-636000 host status = "Stopped" (err=<nil>)
	I0524 12:27:47.598163    3500 status.go:343] host is not running, skipping remaining checks
	I0524 12:27:47.598165    3500 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-636000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.266042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
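
The decode error above is a shape mismatch: `status --output json` emitted a single JSON object for this one-node profile, while the test unmarshals into `[]cmd.Status`. A minimal sketch of a tolerant decoder, with a hypothetical Status struct standing in for cmd.Status:

// sketch_status_decode.go -- Status is a hypothetical stand-in for cmd.Status
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts both shapes: an array (multi-node) or, as in the
// output above, a bare object (single node).
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-636000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	statuses, err := decodeStatuses(raw)
	fmt.Println(statuses, err)
}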

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 node stop m03: exit status 85 (42.99075ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-636000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status: exit status 7 (27.746291ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr: exit status 7 (28.142792ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:47.724417    3508 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:47.724566    3508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.724569    3508 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:47.724571    3508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.724645    3508 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:47.724760    3508 out.go:303] Setting JSON to false
	I0524 12:27:47.724774    3508 mustload.go:65] Loading cluster: multinode-636000
	I0524 12:27:47.724830    3508 notify.go:220] Checking for updates...
	I0524 12:27:47.724962    3508 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:47.724967    3508 status.go:255] checking status of multinode-636000 ...
	I0524 12:27:47.725155    3508 status.go:330] multinode-636000 host status = "Stopped" (err=<nil>)
	I0524 12:27:47.725159    3508 status.go:343] host is not running, skipping remaining checks
	I0524 12:27:47.725161    3508 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.78025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
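
GUEST_NODE_RETRIEVE falls out of node naming: minikube addresses secondary nodes as m02, m03, ... in creation order, so m03 only exists once two nodes beyond the control plane have been added, and this profile holds a single node (see the ProfileList failure above). An illustrative helper, not minikube's own lookup:

// sketch_nodenames.go -- illustrative naming helper, not minikube's lookup
package main

import "fmt"

// nodeNames lists the CLI-visible names for a profile with n nodes: the
// first node answers to the profile name itself, the rest to m02, m03, ...
func nodeNames(profile string, n int) []string {
	names := []string{profile}
	for i := 2; i <= n; i++ {
		names = append(names, fmt.Sprintf("m%02d", i))
	}
	return names
}

func main() {
	fmt.Println(nodeNames("multinode-636000", 1)) // no m03 -> GUEST_NODE_RETRIEVE
	fmt.Println(nodeNames("multinode-636000", 3)) // [multinode-636000 m02 m03]
}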

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 node start m03 --alsologtostderr: exit status 85 (42.67375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:47.780281    3512 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:47.780486    3512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.780489    3512 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:47.780491    3512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.780559    3512 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:47.780779    3512 mustload.go:65] Loading cluster: multinode-636000
	I0524 12:27:47.780941    3512 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:47.784978    3512 out.go:177] 
	W0524 12:27:47.788079    3512 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0524 12:27:47.788083    3512 out.go:239] * 
	* 
	W0524 12:27:47.789696    3512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:27:47.792947    3512 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0524 12:27:47.780281    3512 out.go:296] Setting OutFile to fd 1 ...
I0524 12:27:47.780486    3512 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:27:47.780489    3512 out.go:309] Setting ErrFile to fd 2...
I0524 12:27:47.780491    3512 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:27:47.780559    3512 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
I0524 12:27:47.780779    3512 mustload.go:65] Loading cluster: multinode-636000
I0524 12:27:47.780941    3512 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:27:47.784978    3512 out.go:177] 
W0524 12:27:47.788079    3512 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0524 12:27:47.788083    3512 out.go:239] * 
* 
W0524 12:27:47.789696    3512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0524 12:27:47.792947    3512 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-636000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status: exit status 7 (27.677625ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-636000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (28.322167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-636000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-636000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr
E0524 12:27:50.571822    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:27:51.336722    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.175637625s)

                                                
                                                
-- stdout --
	* [multinode-636000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-636000 in cluster multinode-636000
	* Restarting existing qemu2 VM for "multinode-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:47.964434    3522 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:47.964544    3522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.964547    3522 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:47.964549    3522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:47.964624    3522 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:47.965550    3522 out.go:303] Setting JSON to false
	I0524 12:27:47.980822    3522 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3438,"bootTime":1684953029,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:27:47.980880    3522 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:27:47.986002    3522 out.go:177] * [multinode-636000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:27:47.993045    3522 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:27:47.993065    3522 notify.go:220] Checking for updates...
	I0524 12:27:48.000081    3522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:27:48.004005    3522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:27:48.006989    3522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:27:48.009964    3522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:27:48.013011    3522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:27:48.014599    3522 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:48.014620    3522 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:27:48.018029    3522 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:27:48.024826    3522 start.go:295] selected driver: qemu2
	I0524 12:27:48.024839    3522 start.go:870] validating driver "qemu2" against &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:27:48.024907    3522 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:27:48.026947    3522 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:27:48.026968    3522 cni.go:84] Creating CNI manager for ""
	I0524 12:27:48.026973    3522 cni.go:136] 1 nodes found, recommending kindnet
	I0524 12:27:48.026980    3522 start_flags.go:319] config:
	{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:27:48.027053    3522 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:27:48.034940    3522 out.go:177] * Starting control plane node multinode-636000 in cluster multinode-636000
	I0524 12:27:48.038853    3522 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:27:48.038873    3522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:27:48.038881    3522 cache.go:57] Caching tarball of preloaded images
	I0524 12:27:48.038936    3522 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:27:48.038941    3522 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:27:48.039004    3522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/multinode-636000/config.json ...
	I0524 12:27:48.039324    3522 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:27:48.039336    3522 start.go:364] acquiring machines lock for multinode-636000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:27:48.039364    3522 start.go:368] acquired machines lock for "multinode-636000" in 23.375µs
	I0524 12:27:48.039374    3522 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:27:48.039378    3522 fix.go:55] fixHost starting: 
	I0524 12:27:48.039502    3522 fix.go:103] recreateIfNeeded on multinode-636000: state=Stopped err=<nil>
	W0524 12:27:48.039511    3522 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:27:48.046925    3522 out.go:177] * Restarting existing qemu2 VM for "multinode-636000" ...
	I0524 12:27:48.051098    3522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8a:95:56:7b:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:27:48.053015    3522 main.go:141] libmachine: STDOUT: 
	I0524 12:27:48.053030    3522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:27:48.053055    3522 fix.go:57] fixHost completed within 13.675958ms
	I0524 12:27:48.053059    3522 start.go:83] releasing machines lock for "multinode-636000", held for 13.691083ms
	W0524 12:27:48.053065    3522 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:27:48.053144    3522 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:27:48.053148    3522 start.go:702] Will try again in 5 seconds ...
	I0524 12:27:53.055263    3522 start.go:364] acquiring machines lock for multinode-636000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:27:53.055628    3522 start.go:368] acquired machines lock for "multinode-636000" in 289.209µs
	I0524 12:27:53.055821    3522 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:27:53.055844    3522 fix.go:55] fixHost starting: 
	I0524 12:27:53.056579    3522 fix.go:103] recreateIfNeeded on multinode-636000: state=Stopped err=<nil>
	W0524 12:27:53.056606    3522 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:27:53.062410    3522 out.go:177] * Restarting existing qemu2 VM for "multinode-636000" ...
	I0524 12:27:53.066486    3522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8a:95:56:7b:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:27:53.075924    3522 main.go:141] libmachine: STDOUT: 
	I0524 12:27:53.075993    3522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:27:53.076108    3522 fix.go:57] fixHost completed within 20.264833ms
	I0524 12:27:53.076132    3522 start.go:83] releasing machines lock for "multinode-636000", held for 20.477708ms
	W0524 12:27:53.076705    3522 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:27:53.084845    3522 out.go:177] 
	W0524 12:27:53.089543    3522 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:27:53.089604    3522 out.go:239] * 
	* 
	W0524 12:27:53.091895    3522 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:27:53.101321    3522 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-636000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-636000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (33.026292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
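
Every restart attempt above dies on the same precondition: the qemu2 driver launches QEMU through socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet. A two-line probe reproduces the refusal, assuming the daemon is simply not running on this agent:

// sketch_vmnet_probe.go -- reproduces the driver's failing connect
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" (stale socket file) or "no such file or
		// directory" (daemon never started) both end the start attempt.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}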

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 node delete m03: exit status 89 (38.303833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-636000"

                                                
                                                
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-636000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr: exit status 7 (28.671375ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:53.279635    3535 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:53.279754    3535 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:53.279756    3535 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:53.279761    3535 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:53.279829    3535 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:53.279936    3535 out.go:303] Setting JSON to false
	I0524 12:27:53.279947    3535 mustload.go:65] Loading cluster: multinode-636000
	I0524 12:27:53.279999    3535 notify.go:220] Checking for updates...
	I0524 12:27:53.280488    3535 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:53.280495    3535 status.go:255] checking status of multinode-636000 ...
	I0524 12:27:53.281043    3535 status.go:330] multinode-636000 host status = "Stopped" (err=<nil>)
	I0524 12:27:53.281051    3535 status.go:343] host is not running, skipping remaining checks
	I0524 12:27:53.281053    3535 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.881625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status: exit status 7 (28.384584ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr: exit status 7 (27.947042ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:53.425227    3543 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:53.425359    3543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:53.425361    3543 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:53.425364    3543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:53.425432    3543 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:53.425539    3543 out.go:303] Setting JSON to false
	I0524 12:27:53.425557    3543 mustload.go:65] Loading cluster: multinode-636000
	I0524 12:27:53.425604    3543 notify.go:220] Checking for updates...
	I0524 12:27:53.425741    3543 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:53.425747    3543 status.go:255] checking status of multinode-636000 ...
	I0524 12:27:53.425914    3543 status.go:330] multinode-636000 host status = "Stopped" (err=<nil>)
	I0524 12:27:53.425918    3543 status.go:343] host is not running, skipping remaining checks
	I0524 12:27:53.425920    3543 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (27.833542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)
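
Both "incorrect number of ..." complaints above are count mismatches: a single node reports `host: Stopped` once where a two-node cluster should report it twice. A guess at the kind of assertion involved; whether the real test counts substrings exactly like this, and the expected count of 2, are assumptions:

// sketch_count_stopped.go -- hypothetical version of the count assertion
package main

import (
	"fmt"
	"strings"
)

func main() {
	out := `multinode-636000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	got := strings.Count(out, "host: Stopped")
	fmt.Printf("stopped hosts: %d (want 2, one per node)\n", got)
}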

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.17190775s)

                                                
                                                
-- stdout --
	* [multinode-636000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-636000 in cluster multinode-636000
	* Restarting existing qemu2 VM for "multinode-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:27:53.480660    3547 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:27:53.480807    3547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:53.480809    3547 out.go:309] Setting ErrFile to fd 2...
	I0524 12:27:53.480812    3547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:27:53.480900    3547 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:27:53.481885    3547 out.go:303] Setting JSON to false
	I0524 12:27:53.497028    3547 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3444,"bootTime":1684953029,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:27:53.497092    3547 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:27:53.502361    3547 out.go:177] * [multinode-636000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:27:53.509343    3547 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:27:53.509380    3547 notify.go:220] Checking for updates...
	I0524 12:27:53.517291    3547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:27:53.521169    3547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:27:53.525339    3547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:27:53.528384    3547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:27:53.529699    3547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:27:53.532641    3547 config.go:182] Loaded profile config "multinode-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:27:53.532869    3547 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:27:53.536300    3547 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:27:53.541341    3547 start.go:295] selected driver: qemu2
	I0524 12:27:53.541347    3547 start.go:870] validating driver "qemu2" against &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:27:53.541389    3547 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:27:53.543406    3547 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:27:53.543429    3547 cni.go:84] Creating CNI manager for ""
	I0524 12:27:53.543434    3547 cni.go:136] 1 nodes found, recommending kindnet
	I0524 12:27:53.543444    3547 start_flags.go:319] config:
	{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:27:53.543527    3547 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:27:53.552292    3547 out.go:177] * Starting control plane node multinode-636000 in cluster multinode-636000
	I0524 12:27:53.556345    3547 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:27:53.556363    3547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:27:53.556382    3547 cache.go:57] Caching tarball of preloaded images
	I0524 12:27:53.556431    3547 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:27:53.556444    3547 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:27:53.556497    3547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/multinode-636000/config.json ...
	I0524 12:27:53.556859    3547 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:27:53.556871    3547 start.go:364] acquiring machines lock for multinode-636000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:27:53.556896    3547 start.go:368] acquired machines lock for "multinode-636000" in 19.959µs
	I0524 12:27:53.556905    3547 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:27:53.556909    3547 fix.go:55] fixHost starting: 
	I0524 12:27:53.557022    3547 fix.go:103] recreateIfNeeded on multinode-636000: state=Stopped err=<nil>
	W0524 12:27:53.557030    3547 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:27:53.565307    3547 out.go:177] * Restarting existing qemu2 VM for "multinode-636000" ...
	I0524 12:27:53.569394    3547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8a:95:56:7b:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:27:53.571262    3547 main.go:141] libmachine: STDOUT: 
	I0524 12:27:53.571278    3547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:27:53.571307    3547 fix.go:57] fixHost completed within 14.396625ms
	I0524 12:27:53.571312    3547 start.go:83] releasing machines lock for "multinode-636000", held for 14.411625ms
	W0524 12:27:53.571319    3547 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:27:53.571381    3547 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:27:53.571385    3547 start.go:702] Will try again in 5 seconds ...
	I0524 12:27:58.573467    3547 start.go:364] acquiring machines lock for multinode-636000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:27:58.573770    3547 start.go:368] acquired machines lock for "multinode-636000" in 223.5µs
	I0524 12:27:58.573893    3547 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:27:58.573912    3547 fix.go:55] fixHost starting: 
	I0524 12:27:58.574612    3547 fix.go:103] recreateIfNeeded on multinode-636000: state=Stopped err=<nil>
	W0524 12:27:58.574637    3547 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:27:58.579224    3547 out.go:177] * Restarting existing qemu2 VM for "multinode-636000" ...
	I0524 12:27:58.583278    3547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8a:95:56:7b:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/multinode-636000/disk.qcow2
	I0524 12:27:58.591322    3547 main.go:141] libmachine: STDOUT: 
	I0524 12:27:58.591397    3547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:27:58.591471    3547 fix.go:57] fixHost completed within 17.559167ms
	I0524 12:27:58.591499    3547 start.go:83] releasing machines lock for "multinode-636000", held for 17.701958ms
	W0524 12:27:58.591999    3547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:27:58.599158    3547 out.go:177] 
	W0524 12:27:58.603306    3547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:27:58.603345    3547 out.go:239] * 
	* 
	W0524 12:27:58.605874    3547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:27:58.614115    3547 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (67.299083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
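Every failed start in this section dies at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first obtain a network file descriptor from the socket_vmnet daemon listening on /var/run/socket_vmnet, and that connection is refused. "Connection refused" on a unix socket means the socket file exists but nothing is accepting on it, i.e. the daemon on this CI host is down, so the VM is never launched at all. A minimal diagnostic sketch in Go (the socket path is the SocketVMnetPath from the profile config above; the program itself is our illustration, not minikube code):

	// socketcheck.go: can we reach socket_vmnet the way socket_vmnet_client
	// does? Diagnostic sketch only; the path is the SocketVMnetPath from
	// the profile config logged above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Reproduces the state seen throughout this report:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If this dial fails, the suggested "minikube delete -p ..." cannot fix anything: the problem is on the host side, and the socket_vmnet daemon (typically run as a root service) has to be brought back up first.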

TestMultiNode/serial/ValidateNameConflict (19.78s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-636000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-636000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-636000-m01 --driver=qemu2 : exit status 80 (9.767516875s)

-- stdout --
	* [multinode-636000-m01] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-636000-m01 in cluster multinode-636000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-636000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-636000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-636000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-636000-m02 --driver=qemu2 : exit status 80 (9.763027083s)

-- stdout --
	* [multinode-636000-m02] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-636000-m02 in cluster multinode-636000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-636000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-636000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-636000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-636000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-636000: exit status 89 (78.299042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-636000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-636000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (29.193417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.78s)
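ValidateNameConflict is supposed to exercise the handling of profile names that collide with generated node names (multinode-636000-m01, -m02); here both profile starts already fail at VM creation, so the conflict logic is never reached, and the follow-up "node add" exits 89 only because the control plane host is stopped. For orientation, a rough sketch of the harness pattern behind the "(dbg) Run:" lines above (the arguments are copied from the log; this little driver is our illustration, not the actual helpers_test.go):

	// runsketch.go: exec the built minikube binary and record its exit
	// status, the way the integration helpers do. Illustration only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Arguments copied verbatim from the log above.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"start", "-p", "multinode-636000-m01", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status", ee.ExitCode()) // 80 in this run
		}
	}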

TestPreload (9.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-319000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-319000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.707959958s)

-- stdout --
	* [test-preload-319000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-319000 in cluster test-preload-319000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-319000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:28:18.631224    3602 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:28:18.631355    3602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:28:18.631358    3602 out.go:309] Setting ErrFile to fd 2...
	I0524 12:28:18.631360    3602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:28:18.631436    3602 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:28:18.632510    3602 out.go:303] Setting JSON to false
	I0524 12:28:18.647919    3602 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3469,"bootTime":1684953029,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:28:18.647987    3602 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:28:18.653552    3602 out.go:177] * [test-preload-319000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:28:18.661528    3602 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:28:18.661587    3602 notify.go:220] Checking for updates...
	I0524 12:28:18.668420    3602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:28:18.671617    3602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:28:18.675613    3602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:28:18.676980    3602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:28:18.679601    3602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:28:18.682927    3602 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:28:18.682951    3602 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:28:18.687383    3602 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:28:18.694632    3602 start.go:295] selected driver: qemu2
	I0524 12:28:18.694640    3602 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:28:18.694648    3602 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:28:18.697611    3602 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:28:18.700591    3602 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:28:18.703593    3602 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:28:18.703614    3602 cni.go:84] Creating CNI manager for ""
	I0524 12:28:18.703622    3602 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:28:18.703629    3602 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:28:18.703636    3602 start_flags.go:319] config:
	{Name:test-preload-319000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-319000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:28:18.703705    3602 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.712570    3602 out.go:177] * Starting control plane node test-preload-319000 in cluster test-preload-319000
	I0524 12:28:18.716590    3602 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0524 12:28:18.716666    3602 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/test-preload-319000/config.json ...
	I0524 12:28:18.716689    3602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/test-preload-319000/config.json: {Name:mk444e223aa5f94152f8b70b84c4e3c81d39e6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:28:18.716686    3602 cache.go:107] acquiring lock: {Name:mke78f65591f4bd38f67f73acc96d4c58657a3a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716694    3602 cache.go:107] acquiring lock: {Name:mk83f238773193a6319b253ac5914ab99560dbf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716711    3602 cache.go:107] acquiring lock: {Name:mka38e647c35d13c39fa1202f236d347f2cc53fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716854    3602 cache.go:107] acquiring lock: {Name:mkeab5ead5f1bccb5c90c0d316785d0290f59abb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716920    3602 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:28:18.716929    3602 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:28:18.716943    3602 start.go:364] acquiring machines lock for test-preload-319000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:28:18.716932    3602 cache.go:107] acquiring lock: {Name:mk08ab2acd9bf72003e356be6f2724e1194f99d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716945    3602 cache.go:107] acquiring lock: {Name:mk1b17bd980fdc5f1c2f0b29b31732190e058192 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716948    3602 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0524 12:28:18.716978    3602 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0524 12:28:18.716953    3602 cache.go:107] acquiring lock: {Name:mk6cb242d14ab6457a3c603dd1b60a830ee7a9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716989    3602 cache.go:107] acquiring lock: {Name:mkcc4e29da4516ac174866d7cab4de008e2e7dd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:28:18.716973    3602 start.go:368] acquired machines lock for "test-preload-319000" in 25.125µs
	I0524 12:28:18.716951    3602 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0524 12:28:18.717094    3602 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0524 12:28:18.717056    3602 start.go:93] Provisioning new machine with config: &{Name:test-preload-319000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-319000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:28:18.717146    3602 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:28:18.717170    3602 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0524 12:28:18.724536    3602 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:28:18.717270    3602 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0524 12:28:18.717297    3602 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0524 12:28:18.740862    3602 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 12:28:18.741030    3602 start.go:159] libmachine.API.Create for "test-preload-319000" (driver="qemu2")
	I0524 12:28:18.741424    3602 client.go:168] LocalClient.Create starting
	I0524 12:28:18.741596    3602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:28:18.741634    3602 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:18.741658    3602 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:18.741688    3602 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0524 12:28:18.741737    3602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:28:18.741752    3602 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:18.741764    3602 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:18.742163    3602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:28:18.743134    3602 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0524 12:28:18.745159    3602 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0524 12:28:18.745357    3602 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0524 12:28:18.746333    3602 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0524 12:28:18.747050    3602 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0524 12:28:18.747349    3602 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0524 12:28:18.857572    3602 main.go:141] libmachine: Creating SSH key...
	I0524 12:28:18.916298    3602 main.go:141] libmachine: Creating Disk image...
	I0524 12:28:18.916308    3602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:28:18.916454    3602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2
	I0524 12:28:18.925193    3602 main.go:141] libmachine: STDOUT: 
	I0524 12:28:18.925219    3602 main.go:141] libmachine: STDERR: 
	I0524 12:28:18.925282    3602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2 +20000M
	I0524 12:28:18.933087    3602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:28:18.933102    3602 main.go:141] libmachine: STDERR: 
	I0524 12:28:18.933124    3602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2
	I0524 12:28:18.933130    3602 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:28:18.933166    3602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:57:1d:ea:dd:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2
	I0524 12:28:18.934869    3602 main.go:141] libmachine: STDOUT: 
	I0524 12:28:18.934883    3602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:28:18.934900    3602 client.go:171] LocalClient.Create took 193.452ms
	W0524 12:28:19.692543    3602 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0524 12:28:19.692569    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0524 12:28:19.887438    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0524 12:28:19.887461    3602 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.170780083s
	I0524 12:28:19.887491    3602 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0524 12:28:20.225079    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0524 12:28:20.273730    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0524 12:28:20.280909    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0524 12:28:20.396654    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0524 12:28:20.396675    3602 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.679903417s
	I0524 12:28:20.396683    3602 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0524 12:28:20.448232    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0524 12:28:20.555663    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0524 12:28:20.687653    3602 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0524 12:28:20.687698    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0524 12:28:20.893040    3602 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0524 12:28:20.935048    3602 start.go:128] duration metric: createHost completed in 2.217902459s
	I0524 12:28:20.935087    3602 start.go:83] releasing machines lock for "test-preload-319000", held for 2.218080834s
	W0524 12:28:20.935147    3602 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:28:20.945237    3602 out.go:177] * Deleting "test-preload-319000" in qemu2 ...
	W0524 12:28:20.964743    3602 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:28:20.964772    3602 start.go:702] Will try again in 5 seconds ...
	I0524 12:28:21.725825    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0524 12:28:21.725877    3602 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.009066917s
	I0524 12:28:21.725902    3602 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0524 12:28:22.655998    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0524 12:28:22.656069    3602 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.939415959s
	I0524 12:28:22.656104    3602 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0524 12:28:23.392616    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0524 12:28:23.392667    3602 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.675781625s
	I0524 12:28:23.392695    3602 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0524 12:28:24.494191    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0524 12:28:24.494240    3602 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.777627833s
	I0524 12:28:24.494265    3602 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0524 12:28:25.226667    3602 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0524 12:28:25.226710    3602 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.509817s
	I0524 12:28:25.226737    3602 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0524 12:28:25.964933    3602 start.go:364] acquiring machines lock for test-preload-319000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:28:25.965397    3602 start.go:368] acquired machines lock for "test-preload-319000" in 397.167µs
	I0524 12:28:25.965489    3602 start.go:93] Provisioning new machine with config: &{Name:test-preload-319000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-319000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:28:25.965790    3602 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:28:25.975653    3602 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:28:26.023190    3602 start.go:159] libmachine.API.Create for "test-preload-319000" (driver="qemu2")
	I0524 12:28:26.023234    3602 client.go:168] LocalClient.Create starting
	I0524 12:28:26.023366    3602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:28:26.023419    3602 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:26.023441    3602 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:26.023526    3602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:28:26.023555    3602 main.go:141] libmachine: Decoding PEM data...
	I0524 12:28:26.023570    3602 main.go:141] libmachine: Parsing certificate...
	I0524 12:28:26.024112    3602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:28:26.147173    3602 main.go:141] libmachine: Creating SSH key...
	I0524 12:28:26.254863    3602 main.go:141] libmachine: Creating Disk image...
	I0524 12:28:26.254869    3602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:28:26.255025    3602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2
	I0524 12:28:26.263940    3602 main.go:141] libmachine: STDOUT: 
	I0524 12:28:26.263956    3602 main.go:141] libmachine: STDERR: 
	I0524 12:28:26.264008    3602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2 +20000M
	I0524 12:28:26.271282    3602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:28:26.271294    3602 main.go:141] libmachine: STDERR: 
	I0524 12:28:26.271313    3602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2
	I0524 12:28:26.271318    3602 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:28:26.271357    3602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:0c:31:e9:88:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/test-preload-319000/disk.qcow2
	I0524 12:28:26.272975    3602 main.go:141] libmachine: STDOUT: 
	I0524 12:28:26.272991    3602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:28:26.273005    3602 client.go:171] LocalClient.Create took 249.768208ms
	I0524 12:28:28.275167    3602 start.go:128] duration metric: createHost completed in 2.309350542s
	I0524 12:28:28.275257    3602 start.go:83] releasing machines lock for "test-preload-319000", held for 2.30985975s
	W0524 12:28:28.275757    3602 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-319000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-319000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:28:28.283167    3602 out.go:177] 
	W0524 12:28:28.288167    3602 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:28:28.288191    3602 out.go:239] * 
	* 
	W0524 12:28:28.290596    3602 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:28:28.300116    3602 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-319000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-05-24 12:28:28.315194 -0700 PDT m=+3165.184012709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-319000 -n test-preload-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-319000 -n test-preload-319000: exit status 7 (64.594292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-319000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-319000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-319000
--- FAIL: TestPreload (9.88s)
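The stderr above also shows what --preload=false changes: instead of downloading the single preloaded tarball, minikube caches every component image individually under .minikube/cache/images/arm64, re-fetching any image whose local manifest has the wrong architecture ("arch mismatch: want arm64 got amd64. fixing"). All of that caching succeeded; only the VM start failed. A short sketch that confirms the per-image cache files reported by cache.go above (base path and image names are copied from this log; the checker itself is our illustration):

	// cachecheck.go: verify the image cache files the log reports were
	// written. Paths are copied from the cache.go lines above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		base := "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64"
		for _, img := range []string{
			"registry.k8s.io/kube-apiserver_v1.24.4",
			"registry.k8s.io/kube-proxy_v1.24.4",
			"registry.k8s.io/pause_3.7",
		} {
			if fi, err := os.Stat(filepath.Join(base, img)); err == nil {
				fmt.Println(img, "cached,", fi.Size(), "bytes")
			} else {
				fmt.Println(img, "missing:", err)
			}
		}
	}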

TestScheduledStopUnix (10.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-733000 --memory=2048 --driver=qemu2 
E0524 12:28:32.297292    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-733000 --memory=2048 --driver=qemu2 : exit status 80 (9.872859625s)

-- stdout --
	* [scheduled-stop-733000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-733000 in cluster scheduled-stop-733000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-733000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-733000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-733000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-733000 in cluster scheduled-stop-733000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-733000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-733000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-05-24 12:28:38.360312 -0700 PDT m=+3175.229229917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-733000 -n scheduled-stop-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-733000 -n scheduled-stop-733000: exit status 7 (66.631209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-733000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-733000
--- FAIL: TestScheduledStopUnix (10.05s)
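
Note: the E0524 cert_rotation message near the top of this test is noise rather than the cause of failure: client-go's certificate-rotation watcher still references the client.crt of the ingress-addon-legacy-607000 profile, which an earlier test already deleted. A plausible way to confirm the profile directory is gone (path copied from the log; not part of the recorded run):

	ls /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/   # the stale ingress-addon-legacy-607000 entry should be absent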

TestSkaffold (12.6s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe908738539 version
skaffold_test.go:63: skaffold version: v2.4.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-931000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-931000 --memory=2600 --driver=qemu2 : exit status 80 (10.145415875s)

-- stdout --
	* [skaffold-931000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-931000 in cluster skaffold-931000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-931000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-931000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-931000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-931000 in cluster skaffold-931000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-931000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-931000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-05-24 12:28:50.966013 -0700 PDT m=+3187.835056667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-931000 -n skaffold-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-931000 -n skaffold-931000: exit status 7 (62.01525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-931000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-931000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-931000
--- FAIL: TestSkaffold (12.60s)
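
Note: to tell a missing socket file apart from a daemon that is down while the socket path lingers, the Unix socket can be probed directly. A sketch using macOS's bundled netcat, whose -U flag speaks Unix-domain sockets (commands not part of the recorded run):

	ls -l /var/run/socket_vmnet   # does the socket file exist at all?
	nc -w 1 -U /var/run/socket_vmnet < /dev/null && echo "daemon accepted the connection" || echo "connection refused or timed out"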

TestRunningBinaryUpgrade (164.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0524 12:29:54.219025    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
E0524 12:32:10.351296    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-24 12:32:15.422884 -0700 PDT m=+3392.293958751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-837000 -n running-upgrade-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-837000 -n running-upgrade-837000: exit status 85 (84.502166ms)

-- stdout --
	* Profile "running-upgrade-837000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-837000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-837000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-837000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-837000\"")
helpers_test.go:175: Cleaning up "running-upgrade-837000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-837000
--- FAIL: TestRunningBinaryUpgrade (164.62s)
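
Note: unlike the socket_vmnet failures elsewhere in this report, this test died while downloading the old v1.6.2 release binary ("bad response code: 404"). v1.6.2 predates minikube's darwin/arm64 builds, so no such release asset exists. One way to check by hand, with the caveat that the URL below is an assumption based on minikube's release-asset naming rather than something logged by the test:

	curl -sI https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64 | head -n 1   # expect a 404 status line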

TestKubernetesUpgrade (15.21s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-192000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-192000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.700292458s)

-- stdout --
	* [kubernetes-upgrade-192000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-192000 in cluster kubernetes-upgrade-192000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-192000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:32:15.792594    4118 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:32:15.792697    4118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:32:15.792701    4118 out.go:309] Setting ErrFile to fd 2...
	I0524 12:32:15.792703    4118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:32:15.792771    4118 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:32:15.793832    4118 out.go:303] Setting JSON to false
	I0524 12:32:15.809013    4118 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3706,"bootTime":1684953029,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:32:15.809078    4118 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:32:15.814435    4118 out.go:177] * [kubernetes-upgrade-192000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:32:15.820352    4118 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:32:15.820381    4118 notify.go:220] Checking for updates...
	I0524 12:32:15.826407    4118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:32:15.829350    4118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:32:15.832348    4118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:32:15.835376    4118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:32:15.838366    4118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:32:15.842151    4118 config.go:182] Loaded profile config "cert-expiration-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:32:15.842234    4118 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:32:15.842253    4118 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:32:15.846345    4118 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:32:15.853375    4118 start.go:295] selected driver: qemu2
	I0524 12:32:15.853383    4118 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:32:15.853391    4118 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:32:15.855399    4118 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:32:15.858351    4118 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:32:15.861394    4118 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 12:32:15.861407    4118 cni.go:84] Creating CNI manager for ""
	I0524 12:32:15.861414    4118 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 12:32:15.861418    4118 start_flags.go:319] config:
	{Name:kubernetes-upgrade-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:32:15.861489    4118 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:32:15.870217    4118 out.go:177] * Starting control plane node kubernetes-upgrade-192000 in cluster kubernetes-upgrade-192000
	I0524 12:32:15.874372    4118 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 12:32:15.874396    4118 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 12:32:15.874406    4118 cache.go:57] Caching tarball of preloaded images
	I0524 12:32:15.874457    4118 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:32:15.874462    4118 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0524 12:32:15.874510    4118 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubernetes-upgrade-192000/config.json ...
	I0524 12:32:15.874528    4118 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubernetes-upgrade-192000/config.json: {Name:mkb3da80844bf11e9735c534710b9ebaa1b90e24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:32:15.874728    4118 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:32:15.874741    4118 start.go:364] acquiring machines lock for kubernetes-upgrade-192000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:32:15.874769    4118 start.go:368] acquired machines lock for "kubernetes-upgrade-192000" in 23.125µs
	I0524 12:32:15.874782    4118 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:32:15.874817    4118 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:32:15.882314    4118 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:32:15.898339    4118 start.go:159] libmachine.API.Create for "kubernetes-upgrade-192000" (driver="qemu2")
	I0524 12:32:15.898367    4118 client.go:168] LocalClient.Create starting
	I0524 12:32:15.898438    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:32:15.898457    4118 main.go:141] libmachine: Decoding PEM data...
	I0524 12:32:15.898467    4118 main.go:141] libmachine: Parsing certificate...
	I0524 12:32:15.898512    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:32:15.898526    4118 main.go:141] libmachine: Decoding PEM data...
	I0524 12:32:15.898535    4118 main.go:141] libmachine: Parsing certificate...
	I0524 12:32:15.898885    4118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:32:16.010608    4118 main.go:141] libmachine: Creating SSH key...
	I0524 12:32:16.090402    4118 main.go:141] libmachine: Creating Disk image...
	I0524 12:32:16.090408    4118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:32:16.090561    4118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:16.099187    4118 main.go:141] libmachine: STDOUT: 
	I0524 12:32:16.099200    4118 main.go:141] libmachine: STDERR: 
	I0524 12:32:16.099269    4118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2 +20000M
	I0524 12:32:16.106415    4118 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:32:16.106426    4118 main.go:141] libmachine: STDERR: 
	I0524 12:32:16.106442    4118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:16.106458    4118 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:32:16.106490    4118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a0:5b:55:f1:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:16.107999    4118 main.go:141] libmachine: STDOUT: 
	I0524 12:32:16.108011    4118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:32:16.108039    4118 client.go:171] LocalClient.Create took 209.667583ms
	I0524 12:32:18.110203    4118 start.go:128] duration metric: createHost completed in 2.235390833s
	I0524 12:32:18.110258    4118 start.go:83] releasing machines lock for "kubernetes-upgrade-192000", held for 2.235502541s
	W0524 12:32:18.110313    4118 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:32:18.122876    4118 out.go:177] * Deleting "kubernetes-upgrade-192000" in qemu2 ...
	W0524 12:32:18.141226    4118 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:32:18.141255    4118 start.go:702] Will try again in 5 seconds ...
	I0524 12:32:23.143543    4118 start.go:364] acquiring machines lock for kubernetes-upgrade-192000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:32:23.144105    4118 start.go:368] acquired machines lock for "kubernetes-upgrade-192000" in 439.875µs
	I0524 12:32:23.144215    4118 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:32:23.144509    4118 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:32:23.150435    4118 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:32:23.197976    4118 start.go:159] libmachine.API.Create for "kubernetes-upgrade-192000" (driver="qemu2")
	I0524 12:32:23.198023    4118 client.go:168] LocalClient.Create starting
	I0524 12:32:23.198132    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:32:23.198173    4118 main.go:141] libmachine: Decoding PEM data...
	I0524 12:32:23.198190    4118 main.go:141] libmachine: Parsing certificate...
	I0524 12:32:23.198262    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:32:23.198289    4118 main.go:141] libmachine: Decoding PEM data...
	I0524 12:32:23.198310    4118 main.go:141] libmachine: Parsing certificate...
	I0524 12:32:23.198826    4118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:32:23.324470    4118 main.go:141] libmachine: Creating SSH key...
	I0524 12:32:23.408164    4118 main.go:141] libmachine: Creating Disk image...
	I0524 12:32:23.408170    4118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:32:23.408339    4118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:23.416825    4118 main.go:141] libmachine: STDOUT: 
	I0524 12:32:23.416844    4118 main.go:141] libmachine: STDERR: 
	I0524 12:32:23.416896    4118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2 +20000M
	I0524 12:32:23.424008    4118 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:32:23.424027    4118 main.go:141] libmachine: STDERR: 
	I0524 12:32:23.424041    4118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:23.424050    4118 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:32:23.424085    4118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c5:a6:a1:00:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:23.425618    4118 main.go:141] libmachine: STDOUT: 
	I0524 12:32:23.425632    4118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:32:23.425644    4118 client.go:171] LocalClient.Create took 227.617958ms
	I0524 12:32:25.427819    4118 start.go:128] duration metric: createHost completed in 2.283279833s
	I0524 12:32:25.427901    4118 start.go:83] releasing machines lock for "kubernetes-upgrade-192000", held for 2.28379575s
	W0524 12:32:25.428642    4118 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-192000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-192000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:32:25.438233    4118 out.go:177] 
	W0524 12:32:25.441439    4118 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:32:25.441502    4118 out.go:239] * 
	* 
	W0524 12:32:25.444207    4118 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:32:25.453150    4118 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-192000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-192000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-192000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-192000 status --format={{.Host}}: exit status 7 (28.1715ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-192000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-192000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.176176958s)

-- stdout --
	* [kubernetes-upgrade-192000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-192000 in cluster kubernetes-upgrade-192000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-192000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-192000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:32:25.618961    4138 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:32:25.619071    4138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:32:25.619075    4138 out.go:309] Setting ErrFile to fd 2...
	I0524 12:32:25.619077    4138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:32:25.619151    4138 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:32:25.620148    4138 out.go:303] Setting JSON to false
	I0524 12:32:25.635338    4138 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3716,"bootTime":1684953029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:32:25.635401    4138 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:32:25.640404    4138 out.go:177] * [kubernetes-upgrade-192000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:32:25.647397    4138 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:32:25.647459    4138 notify.go:220] Checking for updates...
	I0524 12:32:25.653274    4138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:32:25.657348    4138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:32:25.660373    4138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:32:25.668331    4138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:32:25.671330    4138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:32:25.674535    4138 config.go:182] Loaded profile config "kubernetes-upgrade-192000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0524 12:32:25.674768    4138 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:32:25.679241    4138 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:32:25.686283    4138 start.go:295] selected driver: qemu2
	I0524 12:32:25.686289    4138 start.go:870] validating driver "qemu2" against &{Name:kubernetes-upgrade-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:32:25.686352    4138 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:32:25.688266    4138 cni.go:84] Creating CNI manager for ""
	I0524 12:32:25.688285    4138 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:32:25.688291    4138 start_flags.go:319] config:
	{Name:kubernetes-upgrade-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:32:25.688369    4138 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:32:25.696274    4138 out.go:177] * Starting control plane node kubernetes-upgrade-192000 in cluster kubernetes-upgrade-192000
	I0524 12:32:25.700113    4138 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:32:25.700150    4138 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:32:25.700165    4138 cache.go:57] Caching tarball of preloaded images
	I0524 12:32:25.700245    4138 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:32:25.700252    4138 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:32:25.700306    4138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubernetes-upgrade-192000/config.json ...
	I0524 12:32:25.700627    4138 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:32:25.700642    4138 start.go:364] acquiring machines lock for kubernetes-upgrade-192000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:32:25.700672    4138 start.go:368] acquired machines lock for "kubernetes-upgrade-192000" in 23.375µs
	I0524 12:32:25.700683    4138 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:32:25.700686    4138 fix.go:55] fixHost starting: 
	I0524 12:32:25.700801    4138 fix.go:103] recreateIfNeeded on kubernetes-upgrade-192000: state=Stopped err=<nil>
	W0524 12:32:25.700810    4138 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:32:25.708314    4138 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-192000" ...
	I0524 12:32:25.712390    4138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c5:a6:a1:00:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:25.714175    4138 main.go:141] libmachine: STDOUT: 
	I0524 12:32:25.714190    4138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:32:25.714217    4138 fix.go:57] fixHost completed within 13.528667ms
	I0524 12:32:25.714222    4138 start.go:83] releasing machines lock for "kubernetes-upgrade-192000", held for 13.545417ms
	W0524 12:32:25.714229    4138 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:32:25.714306    4138 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:32:25.714310    4138 start.go:702] Will try again in 5 seconds ...
	I0524 12:32:30.714771    4138 start.go:364] acquiring machines lock for kubernetes-upgrade-192000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:32:30.715072    4138 start.go:368] acquired machines lock for "kubernetes-upgrade-192000" in 242.167µs
	I0524 12:32:30.715191    4138 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:32:30.715210    4138 fix.go:55] fixHost starting: 
	I0524 12:32:30.715934    4138 fix.go:103] recreateIfNeeded on kubernetes-upgrade-192000: state=Stopped err=<nil>
	W0524 12:32:30.715960    4138 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:32:30.722814    4138 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-192000" ...
	I0524 12:32:30.725885    4138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c5:a6:a1:00:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubernetes-upgrade-192000/disk.qcow2
	I0524 12:32:30.734415    4138 main.go:141] libmachine: STDOUT: 
	I0524 12:32:30.734484    4138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:32:30.734567    4138 fix.go:57] fixHost completed within 19.357541ms
	I0524 12:32:30.734594    4138 start.go:83] releasing machines lock for "kubernetes-upgrade-192000", held for 19.501208ms
	W0524 12:32:30.734948    4138 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-192000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-192000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:32:30.743765    4138 out.go:177] 
	W0524 12:32:30.748070    4138 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:32:30.748115    4138 out.go:239] * 
	* 
	W0524 12:32:30.750500    4138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:32:30.755780    4138 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-192000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-192000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-192000 version --output=json: exit status 1 (63.186791ms)

** stderr ** 
	error: context "kubernetes-upgrade-192000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-05-24 12:32:30.833756 -0700 PDT m=+3407.704983084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-192000 -n kubernetes-upgrade-192000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-192000 -n kubernetes-upgrade-192000: exit status 7 (31.891917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-192000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-192000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-192000
--- FAIL: TestKubernetesUpgrade (15.21s)
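
Every socket_vmnet failure in this report, including this one, reduces to the qemu2 driver being unable to reach the vmnet helper daemon: each attempt to dial /var/run/socket_vmnet returns "Connection refused". A minimal Go probe, sketched here assuming only the socket path shown in the logs above, reproduces the symptom without involving minikube at all:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket the qemu2 driver hands to qemu as fd 3.
		// The path is taken from the log lines above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // "connection refused" when the daemon is down
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet reachable")
	}

If this probe fails on the build agent, the socket_vmnet daemon itself is down, and restarting it is a more plausible fix than the per-profile "minikube delete" the error message suggests, since the same error recurs across unrelated tests below.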

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16573
- KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1086782885/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16573
- KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1976932748/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)
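
Both hyperkit subtests fail for a structural reason rather than a regression: the hyperkit driver exists only for darwin/amd64, so on this darwin/arm64 agent they can only ever exit with DRV_UNSUPPORTED_OS (exit status 56). A hypothetical guard of the following shape, which is not minikube's actual test code, would skip these runs instead of failing them:

	package hyperkit_test // hypothetical package, not minikube's test code

	import (
		"runtime"
		"testing"
	)

	// skipIfUnsupportedArch is an illustrative helper: hyperkit only exists
	// for darwin/amd64, so skip instead of failing on Apple Silicon.
	func skipIfUnsupportedArch(t *testing.T) {
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}

Skipping would keep the failure count meaningful on Apple Silicon agents without hiding real hyperkit regressions on Intel agents.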

TestStoppedBinaryUpgrade/Setup (136.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (136.96s)
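
The Setup step aborts while downloading the v1.6.2 release binary. The exact URL is not captured in the log, but a plausible reading is that v1.6.2 predates darwin/arm64 minikube builds, so the requested asset simply does not exist. A small Go check against the conventional release-bucket layout (the URL below is an assumption, not taken from the log) yields the same response code:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Assumed URL layout for minikube release binaries; v1.6.2 shipped
		// before darwin/arm64 builds existed, so a 404 is expected.
		url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println(err)
			return
		}
		resp.Body.Close()
		fmt.Println(resp.StatusCode) // 404, matching "bad response code: 404" above
	}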

TestPause/serial/Start (9.9s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-541000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0524 12:32:38.059544    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/ingress-addon-legacy-607000/client.crt: no such file or directory
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-541000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.832277959s)

-- stdout --
	* [pause-541000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-541000 in cluster pause-541000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-541000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-541000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-541000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-541000 -n pause-541000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-541000 -n pause-541000: exit status 7 (67.031958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-541000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.90s)

TestNoKubernetes/serial/StartWithK8s (9.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-218000 --driver=qemu2 
E0524 12:32:50.568766    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-218000 --driver=qemu2 : exit status 80 (9.86787775s)

-- stdout --
	* [NoKubernetes-218000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-218000 in cluster NoKubernetes-218000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-218000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000: exit status 7 (72.1925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.94s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2 : exit status 80 (5.24428s)

-- stdout --
	* [NoKubernetes-218000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-218000
	* Restarting existing qemu2 VM for "NoKubernetes-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000: exit status 7 (72.873208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2315475s)

-- stdout --
	* [NoKubernetes-218000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-218000
	* Restarting existing qemu2 VM for "NoKubernetes-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000: exit status 7 (62.383459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-218000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-218000 --driver=qemu2 : exit status 80 (5.247626667s)

-- stdout --
	* [NoKubernetes-218000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-218000
	* Restarting existing qemu2 VM for "NoKubernetes-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-218000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-218000 -n NoKubernetes-218000: exit status 7 (67.191333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

TestNetworkPlugins/group/auto/Start (9.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.697309208s)

-- stdout --
	* [auto-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-220000 in cluster auto-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:33:07.387716    4255 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:33:07.387842    4255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:07.387845    4255 out.go:309] Setting ErrFile to fd 2...
	I0524 12:33:07.387848    4255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:07.387911    4255 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:33:07.388956    4255 out.go:303] Setting JSON to false
	I0524 12:33:07.404463    4255 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3758,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:33:07.404537    4255 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:33:07.409143    4255 out.go:177] * [auto-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:33:07.417243    4255 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:33:07.417267    4255 notify.go:220] Checking for updates...
	I0524 12:33:07.425165    4255 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:33:07.428171    4255 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:33:07.432091    4255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:33:07.435118    4255 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:33:07.438117    4255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:33:07.441476    4255 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:33:07.441498    4255 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:33:07.445111    4255 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:33:07.452100    4255 start.go:295] selected driver: qemu2
	I0524 12:33:07.452106    4255 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:33:07.452112    4255 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:33:07.454070    4255 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:33:07.458096    4255 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:33:07.461253    4255 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:33:07.461277    4255 cni.go:84] Creating CNI manager for ""
	I0524 12:33:07.461285    4255 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:33:07.461290    4255 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:33:07.461296    4255 start_flags.go:319] config:
	{Name:auto-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:33:07.461375    4255 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:33:07.470102    4255 out.go:177] * Starting control plane node auto-220000 in cluster auto-220000
	I0524 12:33:07.474103    4255 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:33:07.474132    4255 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:33:07.474144    4255 cache.go:57] Caching tarball of preloaded images
	I0524 12:33:07.474199    4255 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:33:07.474205    4255 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:33:07.474263    4255 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/auto-220000/config.json ...
	I0524 12:33:07.474275    4255 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/auto-220000/config.json: {Name:mka73b20f32151b2c969d637c774d1d535e229c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:33:07.474483    4255 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:33:07.474497    4255 start.go:364] acquiring machines lock for auto-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:07.474527    4255 start.go:368] acquired machines lock for "auto-220000" in 24.75µs
	I0524 12:33:07.474542    4255 start.go:93] Provisioning new machine with config: &{Name:auto-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:07.474566    4255 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:07.479208    4255 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:07.495321    4255 start.go:159] libmachine.API.Create for "auto-220000" (driver="qemu2")
	I0524 12:33:07.495343    4255 client.go:168] LocalClient.Create starting
	I0524 12:33:07.495410    4255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:07.495429    4255 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:07.495444    4255 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:07.495496    4255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:07.495511    4255 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:07.495517    4255 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:07.495837    4255 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:07.610352    4255 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:07.725549    4255 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:07.725559    4255 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:07.725710    4255 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2
	I0524 12:33:07.734374    4255 main.go:141] libmachine: STDOUT: 
	I0524 12:33:07.734386    4255 main.go:141] libmachine: STDERR: 
	I0524 12:33:07.734441    4255 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2 +20000M
	I0524 12:33:07.741554    4255 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:07.741565    4255 main.go:141] libmachine: STDERR: 
	I0524 12:33:07.741581    4255 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2
	I0524 12:33:07.741593    4255 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:07.741629    4255 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:05:23:04:57:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2
	I0524 12:33:07.743113    4255 main.go:141] libmachine: STDOUT: 
	I0524 12:33:07.743124    4255 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:07.743140    4255 client.go:171] LocalClient.Create took 247.795792ms
	I0524 12:33:09.745273    4255 start.go:128] duration metric: createHost completed in 2.270713083s
	I0524 12:33:09.745334    4255 start.go:83] releasing machines lock for "auto-220000", held for 2.27082075s
	W0524 12:33:09.745389    4255 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:09.752832    4255 out.go:177] * Deleting "auto-220000" in qemu2 ...
	W0524 12:33:09.772698    4255 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:09.772724    4255 start.go:702] Will try again in 5 seconds ...
	I0524 12:33:14.774603    4255 start.go:364] acquiring machines lock for auto-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:14.775140    4255 start.go:368] acquired machines lock for "auto-220000" in 412.917µs
	I0524 12:33:14.775298    4255 start.go:93] Provisioning new machine with config: &{Name:auto-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:14.775636    4255 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:14.784554    4255 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:14.833245    4255 start.go:159] libmachine.API.Create for "auto-220000" (driver="qemu2")
	I0524 12:33:14.833299    4255 client.go:168] LocalClient.Create starting
	I0524 12:33:14.833417    4255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:14.833456    4255 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:14.833475    4255 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:14.833552    4255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:14.833581    4255 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:14.833593    4255 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:14.834119    4255 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:14.957275    4255 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:14.998632    4255 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:14.998638    4255 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:14.998795    4255 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2
	I0524 12:33:15.007410    4255 main.go:141] libmachine: STDOUT: 
	I0524 12:33:15.007424    4255 main.go:141] libmachine: STDERR: 
	I0524 12:33:15.007479    4255 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2 +20000M
	I0524 12:33:15.014644    4255 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:15.014656    4255 main.go:141] libmachine: STDERR: 
	I0524 12:33:15.014667    4255 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2
	I0524 12:33:15.014675    4255 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:15.014713    4255 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:6c:d5:a5:1f:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/auto-220000/disk.qcow2
	I0524 12:33:15.016253    4255 main.go:141] libmachine: STDOUT: 
	I0524 12:33:15.016267    4255 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:15.016291    4255 client.go:171] LocalClient.Create took 182.982208ms
	I0524 12:33:17.018427    4255 start.go:128] duration metric: createHost completed in 2.242783375s
	I0524 12:33:17.018493    4255 start.go:83] releasing machines lock for "auto-220000", held for 2.243353084s
	W0524 12:33:17.019121    4255 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:17.026340    4255 out.go:177] 
	W0524 12:33:17.032403    4255 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:33:17.032427    4255 out.go:239] * 
	* 
	W0524 12:33:17.035151    4255 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:33:17.045285    4255 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.70s)
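
The verbose trace above pinpoints where the create path dies: qemu-img convert and qemu-img resize both succeed, and the failure comes from the socket_vmnet_client wrapper that launches qemu-system-aarch64 with the vmnet connection inherited as fd 3 (-netdev socket,id=net0,fd=3). The sketch below, whose exec plumbing is a simplified assumption rather than minikube's actual driver code, shows the ordering the log implies: if the client cannot connect to /var/run/socket_vmnet, qemu is never launched at all.

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client connects to the unix socket first, then execs the
		// VM command with the connection inherited as fd 3; qemu's own flags
		// (visible in the log above) are elided here.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "qemu-system-aarch64")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// With the daemon down this surfaces as:
			//   Failed to connect to "/var/run/socket_vmnet": Connection refused
			os.Exit(1)
		}
	}

The same two-phase pattern (create, delete, retry after 5 seconds, fail) repeats verbatim in every network-plugin Start failure that follows.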

TestNetworkPlugins/group/calico/Start (9.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.723341833s)

-- stdout --
	* [calico-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-220000 in cluster calico-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:33:19.153797    4364 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:33:19.153947    4364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:19.153950    4364 out.go:309] Setting ErrFile to fd 2...
	I0524 12:33:19.153963    4364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:19.154039    4364 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:33:19.155074    4364 out.go:303] Setting JSON to false
	I0524 12:33:19.170138    4364 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3770,"bootTime":1684953029,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:33:19.170225    4364 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:33:19.174036    4364 out.go:177] * [calico-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:33:19.185897    4364 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:33:19.182011    4364 notify.go:220] Checking for updates...
	I0524 12:33:19.193886    4364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:33:19.195378    4364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:33:19.200049    4364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:33:19.202949    4364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:33:19.205870    4364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:33:19.209172    4364 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:33:19.209190    4364 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:33:19.213813    4364 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:33:19.220839    4364 start.go:295] selected driver: qemu2
	I0524 12:33:19.220846    4364 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:33:19.220857    4364 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:33:19.222864    4364 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:33:19.226895    4364 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:33:19.229883    4364 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:33:19.229901    4364 cni.go:84] Creating CNI manager for "calico"
	I0524 12:33:19.229905    4364 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0524 12:33:19.229912    4364 start_flags.go:319] config:
	{Name:calico-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:33:19.229989    4364 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:33:19.238881    4364 out.go:177] * Starting control plane node calico-220000 in cluster calico-220000
	I0524 12:33:19.242844    4364 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:33:19.242881    4364 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:33:19.242895    4364 cache.go:57] Caching tarball of preloaded images
	I0524 12:33:19.242962    4364 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:33:19.242967    4364 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:33:19.243022    4364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/calico-220000/config.json ...
	I0524 12:33:19.243034    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/calico-220000/config.json: {Name:mkf793907e296e669f227c8882be9c6c886608b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:33:19.243252    4364 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:33:19.243268    4364 start.go:364] acquiring machines lock for calico-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:19.243297    4364 start.go:368] acquired machines lock for "calico-220000" in 24.875µs
	I0524 12:33:19.243311    4364 start.go:93] Provisioning new machine with config: &{Name:calico-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:19.243338    4364 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:19.250908    4364 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:19.267325    4364 start.go:159] libmachine.API.Create for "calico-220000" (driver="qemu2")
	I0524 12:33:19.267345    4364 client.go:168] LocalClient.Create starting
	I0524 12:33:19.267404    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:19.267426    4364 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:19.267436    4364 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:19.267465    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:19.267480    4364 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:19.267486    4364 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:19.267820    4364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:19.388720    4364 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:19.505145    4364 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:19.505154    4364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:19.505309    4364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2
	I0524 12:33:19.513890    4364 main.go:141] libmachine: STDOUT: 
	I0524 12:33:19.513901    4364 main.go:141] libmachine: STDERR: 
	I0524 12:33:19.513955    4364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2 +20000M
	I0524 12:33:19.520974    4364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:19.520990    4364 main.go:141] libmachine: STDERR: 
	I0524 12:33:19.521012    4364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2
	I0524 12:33:19.521017    4364 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:19.521059    4364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:75:12:39:7b:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2
	I0524 12:33:19.522551    4364 main.go:141] libmachine: STDOUT: 
	I0524 12:33:19.522562    4364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:19.522589    4364 client.go:171] LocalClient.Create took 255.234875ms
	I0524 12:33:21.524722    4364 start.go:128] duration metric: createHost completed in 2.28138925s
	I0524 12:33:21.524786    4364 start.go:83] releasing machines lock for "calico-220000", held for 2.281502125s
	W0524 12:33:21.524877    4364 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:21.533398    4364 out.go:177] * Deleting "calico-220000" in qemu2 ...
	W0524 12:33:21.555255    4364 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:21.555282    4364 start.go:702] Will try again in 5 seconds ...
	I0524 12:33:26.557490    4364 start.go:364] acquiring machines lock for calico-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:26.558153    4364 start.go:368] acquired machines lock for "calico-220000" in 524.542µs
	I0524 12:33:26.558261    4364 start.go:93] Provisioning new machine with config: &{Name:calico-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:26.558549    4364 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:26.566470    4364 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:26.613865    4364 start.go:159] libmachine.API.Create for "calico-220000" (driver="qemu2")
	I0524 12:33:26.613913    4364 client.go:168] LocalClient.Create starting
	I0524 12:33:26.614053    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:26.614092    4364 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:26.614109    4364 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:26.614184    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:26.614213    4364 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:26.614227    4364 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:26.614757    4364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:26.746726    4364 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:26.790068    4364 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:26.790073    4364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:26.790212    4364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2
	I0524 12:33:26.798657    4364 main.go:141] libmachine: STDOUT: 
	I0524 12:33:26.798672    4364 main.go:141] libmachine: STDERR: 
	I0524 12:33:26.798737    4364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2 +20000M
	I0524 12:33:26.805865    4364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:26.805878    4364 main.go:141] libmachine: STDERR: 
	I0524 12:33:26.805889    4364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2
	I0524 12:33:26.805894    4364 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:26.805936    4364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:91:9c:94:59:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/calico-220000/disk.qcow2
	I0524 12:33:26.807487    4364 main.go:141] libmachine: STDOUT: 
	I0524 12:33:26.807500    4364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:26.807511    4364 client.go:171] LocalClient.Create took 193.593584ms
	I0524 12:33:28.809677    4364 start.go:128] duration metric: createHost completed in 2.251120834s
	I0524 12:33:28.809731    4364 start.go:83] releasing machines lock for "calico-220000", held for 2.251569666s
	W0524 12:33:28.810430    4364 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:28.819099    4364 out.go:177] 
	W0524 12:33:28.824209    4364 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:33:28.824234    4364 out.go:239] * 
	* 
	W0524 12:33:28.826540    4364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:33:28.835991    4364 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.73s)

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.818636042s)

-- stdout --
	* [custom-flannel-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-220000 in cluster custom-flannel-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:33:31.147522    4482 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:33:31.147659    4482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:31.147662    4482 out.go:309] Setting ErrFile to fd 2...
	I0524 12:33:31.147665    4482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:31.147748    4482 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:33:31.148823    4482 out.go:303] Setting JSON to false
	I0524 12:33:31.163981    4482 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3782,"bootTime":1684953029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:33:31.164044    4482 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:33:31.167898    4482 out.go:177] * [custom-flannel-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:33:31.175897    4482 notify.go:220] Checking for updates...
	I0524 12:33:31.179860    4482 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:33:31.183839    4482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:33:31.186912    4482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:33:31.189867    4482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:33:31.192870    4482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:33:31.195866    4482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:33:31.199192    4482 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:33:31.199212    4482 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:33:31.203812    4482 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:33:31.210864    4482 start.go:295] selected driver: qemu2
	I0524 12:33:31.210870    4482 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:33:31.210877    4482 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:33:31.212773    4482 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:33:31.216767    4482 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:33:31.219898    4482 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:33:31.219918    4482 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0524 12:33:31.219938    4482 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0524 12:33:31.219946    4482 start_flags.go:319] config:
	{Name:custom-flannel-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:33:31.220026    4482 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:33:31.225835    4482 out.go:177] * Starting control plane node custom-flannel-220000 in cluster custom-flannel-220000
	I0524 12:33:31.229882    4482 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:33:31.229904    4482 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:33:31.229917    4482 cache.go:57] Caching tarball of preloaded images
	I0524 12:33:31.229985    4482 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:33:31.229991    4482 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:33:31.230062    4482 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/custom-flannel-220000/config.json ...
	I0524 12:33:31.230075    4482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/custom-flannel-220000/config.json: {Name:mkc6dfe35a109fcbc4188ad5ea8eee22fe3d2fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:33:31.230283    4482 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:33:31.230299    4482 start.go:364] acquiring machines lock for custom-flannel-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:31.230329    4482 start.go:368] acquired machines lock for "custom-flannel-220000" in 25.416µs
	I0524 12:33:31.230343    4482 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:31.230369    4482 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:31.238842    4482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:31.255613    4482 start.go:159] libmachine.API.Create for "custom-flannel-220000" (driver="qemu2")
	I0524 12:33:31.255647    4482 client.go:168] LocalClient.Create starting
	I0524 12:33:31.255719    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:31.255747    4482 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:31.255761    4482 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:31.255810    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:31.255825    4482 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:31.255834    4482 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:31.256173    4482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:31.370996    4482 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:31.580528    4482 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:31.580536    4482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:31.580732    4482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2
	I0524 12:33:31.589629    4482 main.go:141] libmachine: STDOUT: 
	I0524 12:33:31.589641    4482 main.go:141] libmachine: STDERR: 
	I0524 12:33:31.589712    4482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2 +20000M
	I0524 12:33:31.596934    4482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:31.596948    4482 main.go:141] libmachine: STDERR: 
	I0524 12:33:31.596969    4482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2
	I0524 12:33:31.596982    4482 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:31.597025    4482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:2f:6f:24:81:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2
	I0524 12:33:31.598581    4482 main.go:141] libmachine: STDOUT: 
	I0524 12:33:31.598593    4482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:31.598609    4482 client.go:171] LocalClient.Create took 342.959292ms
	I0524 12:33:33.600809    4482 start.go:128] duration metric: createHost completed in 2.37042875s
	I0524 12:33:33.600900    4482 start.go:83] releasing machines lock for "custom-flannel-220000", held for 2.370581667s
	W0524 12:33:33.601271    4482 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:33.608885    4482 out.go:177] * Deleting "custom-flannel-220000" in qemu2 ...
	W0524 12:33:33.627877    4482 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:33.627907    4482 start.go:702] Will try again in 5 seconds ...
	I0524 12:33:38.630087    4482 start.go:364] acquiring machines lock for custom-flannel-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:38.630795    4482 start.go:368] acquired machines lock for "custom-flannel-220000" in 592.584µs
	I0524 12:33:38.630913    4482 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:38.631251    4482 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:38.640908    4482 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:38.689587    4482 start.go:159] libmachine.API.Create for "custom-flannel-220000" (driver="qemu2")
	I0524 12:33:38.689622    4482 client.go:168] LocalClient.Create starting
	I0524 12:33:38.689748    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:38.689793    4482 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:38.689810    4482 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:38.689887    4482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:38.689914    4482 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:38.689931    4482 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:38.690434    4482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:38.818495    4482 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:38.880686    4482 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:38.880693    4482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:38.880848    4482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2
	I0524 12:33:38.889235    4482 main.go:141] libmachine: STDOUT: 
	I0524 12:33:38.889248    4482 main.go:141] libmachine: STDERR: 
	I0524 12:33:38.889301    4482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2 +20000M
	I0524 12:33:38.896479    4482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:38.896502    4482 main.go:141] libmachine: STDERR: 
	I0524 12:33:38.896523    4482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2
	I0524 12:33:38.896531    4482 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:38.896567    4482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:f8:b5:bd:34:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/custom-flannel-220000/disk.qcow2
	I0524 12:33:38.898097    4482 main.go:141] libmachine: STDOUT: 
	I0524 12:33:38.898108    4482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:38.898121    4482 client.go:171] LocalClient.Create took 208.492708ms
	I0524 12:33:40.900257    4482 start.go:128] duration metric: createHost completed in 2.26900425s
	I0524 12:33:40.900313    4482 start.go:83] releasing machines lock for "custom-flannel-220000", held for 2.269516459s
	W0524 12:33:40.900973    4482 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:40.911639    4482 out.go:177] 
	W0524 12:33:40.914806    4482 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:33:40.914852    4482 out.go:239] * 
	* 
	W0524 12:33:40.917445    4482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:33:40.925619    4482 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)

TestNetworkPlugins/group/false/Start (10.17s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p false-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.158974667s)

-- stdout --
	* [false-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-220000 in cluster false-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:33:43.250797    4599 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:33:43.250932    4599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:43.250935    4599 out.go:309] Setting ErrFile to fd 2...
	I0524 12:33:43.250938    4599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:43.251006    4599 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:33:43.252063    4599 out.go:303] Setting JSON to false
	I0524 12:33:43.267371    4599 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3794,"bootTime":1684953029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:33:43.267431    4599 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:33:43.272546    4599 out.go:177] * [false-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:33:43.279517    4599 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:33:43.279548    4599 notify.go:220] Checking for updates...
	I0524 12:33:43.285483    4599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:33:43.288442    4599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:33:43.291495    4599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:33:43.294519    4599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:33:43.297500    4599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:33:43.300826    4599 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:33:43.300844    4599 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:33:43.305514    4599 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:33:43.312510    4599 start.go:295] selected driver: qemu2
	I0524 12:33:43.312517    4599 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:33:43.312525    4599 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:33:43.314419    4599 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:33:43.317505    4599 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:33:43.321561    4599 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:33:43.321579    4599 cni.go:84] Creating CNI manager for "false"
	I0524 12:33:43.321583    4599 start_flags.go:319] config:
	{Name:false-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:33:43.321656    4599 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:33:43.330364    4599 out.go:177] * Starting control plane node false-220000 in cluster false-220000
	I0524 12:33:43.334515    4599 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:33:43.334536    4599 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:33:43.334548    4599 cache.go:57] Caching tarball of preloaded images
	I0524 12:33:43.334611    4599 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:33:43.334618    4599 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:33:43.334691    4599 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/false-220000/config.json ...
	I0524 12:33:43.334707    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/false-220000/config.json: {Name:mk1d2db6ce98f3518a9846ad5aa8c1a1428bb4ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:33:43.334904    4599 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:33:43.334917    4599 start.go:364] acquiring machines lock for false-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:43.334947    4599 start.go:368] acquired machines lock for "false-220000" in 24.375µs
	I0524 12:33:43.334959    4599 start.go:93] Provisioning new machine with config: &{Name:false-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:43.334994    4599 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:43.341475    4599 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:43.358790    4599 start.go:159] libmachine.API.Create for "false-220000" (driver="qemu2")
	I0524 12:33:43.358811    4599 client.go:168] LocalClient.Create starting
	I0524 12:33:43.358872    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:43.358892    4599 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:43.358904    4599 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:43.358942    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:43.358957    4599 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:43.358967    4599 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:43.359289    4599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:43.472928    4599 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:43.679455    4599 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:43.679463    4599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:43.679642    4599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2
	I0524 12:33:43.688833    4599 main.go:141] libmachine: STDOUT: 
	I0524 12:33:43.688859    4599 main.go:141] libmachine: STDERR: 
	I0524 12:33:43.688924    4599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2 +20000M
	I0524 12:33:43.696142    4599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:43.696155    4599 main.go:141] libmachine: STDERR: 
	I0524 12:33:43.696183    4599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2
	I0524 12:33:43.696191    4599 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:43.696244    4599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a3:5d:cb:80:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2
	I0524 12:33:43.697803    4599 main.go:141] libmachine: STDOUT: 
	I0524 12:33:43.697814    4599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:43.697836    4599 client.go:171] LocalClient.Create took 339.023917ms
	I0524 12:33:45.700192    4599 start.go:128] duration metric: createHost completed in 2.3652055s
	I0524 12:33:45.700234    4599 start.go:83] releasing machines lock for "false-220000", held for 2.365303s
	W0524 12:33:45.700286    4599 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:45.708827    4599 out.go:177] * Deleting "false-220000" in qemu2 ...
	W0524 12:33:45.729190    4599 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:45.729217    4599 start.go:702] Will try again in 5 seconds ...
	I0524 12:33:50.731385    4599 start.go:364] acquiring machines lock for false-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:50.731909    4599 start.go:368] acquired machines lock for "false-220000" in 430µs
	I0524 12:33:50.732013    4599 start.go:93] Provisioning new machine with config: &{Name:false-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:50.732325    4599 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:50.742212    4599 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:50.791183    4599 start.go:159] libmachine.API.Create for "false-220000" (driver="qemu2")
	I0524 12:33:50.791221    4599 client.go:168] LocalClient.Create starting
	I0524 12:33:50.791353    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:50.791395    4599 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:50.791414    4599 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:50.791492    4599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:50.791541    4599 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:50.791558    4599 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:50.792142    4599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:50.919440    4599 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:51.323987    4599 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:51.324003    4599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:51.324208    4599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2
	I0524 12:33:51.333622    4599 main.go:141] libmachine: STDOUT: 
	I0524 12:33:51.333641    4599 main.go:141] libmachine: STDERR: 
	I0524 12:33:51.333710    4599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2 +20000M
	I0524 12:33:51.341047    4599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:51.341060    4599 main.go:141] libmachine: STDERR: 
	I0524 12:33:51.341073    4599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2
	I0524 12:33:51.341083    4599 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:51.341133    4599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:3f:38:26:dc:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/false-220000/disk.qcow2
	I0524 12:33:51.342688    4599 main.go:141] libmachine: STDOUT: 
	I0524 12:33:51.342701    4599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:51.342714    4599 client.go:171] LocalClient.Create took 551.494334ms
	I0524 12:33:53.344914    4599 start.go:128] duration metric: createHost completed in 2.612574542s
	I0524 12:33:53.344973    4599 start.go:83] releasing machines lock for "false-220000", held for 2.613067417s
	W0524 12:33:53.345595    4599 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:53.354069    4599 out.go:177] 
	W0524 12:33:53.358216    4599 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:33:53.358240    4599 out.go:239] * 
	* 
	W0524 12:33:53.361060    4599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:33:53.369123    4599 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.17s)
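
Note: every attempt above fails before QEMU even launches, because nothing is listening on the unix socket /var/run/socket_vmnet (the SocketVMnetPath in the config dumps). The probe below is a hypothetical standalone Go sketch, not part of the test suite; it dials the same socket path directly, so when the socket_vmnet daemon is down it reproduces the exact "Connection refused" seen in the logs.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from SocketVMnetPath in the config dumps above.
	const sock = "/var/run/socket_vmnet"

	// A healthy socket_vmnet daemon accepts unix-domain connections here;
	// "connection refused" matches the ERROR lines in the failed tests.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way, the root cause is host-side (the socket_vmnet daemon is not running on the agent), which would explain why every network-plugin Start test in this group fails identically.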

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0524 12:34:01.544138    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.666395167s)

                                                
                                                
-- stdout --
	* [kindnet-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-220000 in cluster kindnet-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:33:55.531487    4711 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:33:55.531604    4711 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:55.531607    4711 out.go:309] Setting ErrFile to fd 2...
	I0524 12:33:55.531610    4711 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:33:55.531676    4711 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:33:55.532693    4711 out.go:303] Setting JSON to false
	I0524 12:33:55.548023    4711 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3806,"bootTime":1684953029,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:33:55.548088    4711 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:33:55.552586    4711 out.go:177] * [kindnet-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:33:55.560771    4711 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:33:55.560771    4711 notify.go:220] Checking for updates...
	I0524 12:33:55.568628    4711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:33:55.571692    4711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:33:55.574606    4711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:33:55.577640    4711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:33:55.580744    4711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:33:55.584024    4711 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:33:55.584046    4711 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:33:55.588644    4711 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:33:55.595590    4711 start.go:295] selected driver: qemu2
	I0524 12:33:55.595596    4711 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:33:55.595602    4711 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:33:55.597541    4711 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:33:55.601643    4711 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:33:55.604786    4711 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:33:55.604805    4711 cni.go:84] Creating CNI manager for "kindnet"
	I0524 12:33:55.604808    4711 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0524 12:33:55.604822    4711 start_flags.go:319] config:
	{Name:kindnet-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:33:55.604898    4711 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:33:55.613685    4711 out.go:177] * Starting control plane node kindnet-220000 in cluster kindnet-220000
	I0524 12:33:55.616556    4711 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:33:55.616577    4711 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:33:55.616587    4711 cache.go:57] Caching tarball of preloaded images
	I0524 12:33:55.616642    4711 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:33:55.616647    4711 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:33:55.616700    4711 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kindnet-220000/config.json ...
	I0524 12:33:55.616713    4711 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kindnet-220000/config.json: {Name:mkba16d535a29ec90d0bc81142916f5356b22ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:33:55.616920    4711 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:33:55.616935    4711 start.go:364] acquiring machines lock for kindnet-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:33:55.616965    4711 start.go:368] acquired machines lock for "kindnet-220000" in 24.583µs
	I0524 12:33:55.616980    4711 start.go:93] Provisioning new machine with config: &{Name:kindnet-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:33:55.617004    4711 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:33:55.625624    4711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:33:55.643238    4711 start.go:159] libmachine.API.Create for "kindnet-220000" (driver="qemu2")
	I0524 12:33:55.643516    4711 client.go:168] LocalClient.Create starting
	I0524 12:33:55.643595    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:33:55.643620    4711 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:55.643645    4711 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:55.643697    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:33:55.643714    4711 main.go:141] libmachine: Decoding PEM data...
	I0524 12:33:55.643723    4711 main.go:141] libmachine: Parsing certificate...
	I0524 12:33:55.644188    4711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:33:55.759457    4711 main.go:141] libmachine: Creating SSH key...
	I0524 12:33:55.821913    4711 main.go:141] libmachine: Creating Disk image...
	I0524 12:33:55.821918    4711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:33:55.822080    4711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2
	I0524 12:33:55.830673    4711 main.go:141] libmachine: STDOUT: 
	I0524 12:33:55.830687    4711 main.go:141] libmachine: STDERR: 
	I0524 12:33:55.830739    4711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2 +20000M
	I0524 12:33:55.838057    4711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:33:55.838069    4711 main.go:141] libmachine: STDERR: 
	I0524 12:33:55.838091    4711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2
	I0524 12:33:55.838098    4711 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:33:55.838143    4711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:63:c9:fa:46:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2
	I0524 12:33:55.839593    4711 main.go:141] libmachine: STDOUT: 
	I0524 12:33:55.839605    4711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:33:55.839621    4711 client.go:171] LocalClient.Create took 196.097042ms
	I0524 12:33:57.841766    4711 start.go:128] duration metric: createHost completed in 2.224766834s
	I0524 12:33:57.841833    4711 start.go:83] releasing machines lock for "kindnet-220000", held for 2.224881083s
	W0524 12:33:57.841920    4711 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:57.850394    4711 out.go:177] * Deleting "kindnet-220000" in qemu2 ...
	W0524 12:33:57.873476    4711 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:33:57.873505    4711 start.go:702] Will try again in 5 seconds ...
	I0524 12:34:02.875771    4711 start.go:364] acquiring machines lock for kindnet-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:02.876266    4711 start.go:368] acquired machines lock for "kindnet-220000" in 393µs
	I0524 12:34:02.876364    4711 start.go:93] Provisioning new machine with config: &{Name:kindnet-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:02.876610    4711 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:02.884400    4711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:02.933475    4711 start.go:159] libmachine.API.Create for "kindnet-220000" (driver="qemu2")
	I0524 12:34:02.933511    4711 client.go:168] LocalClient.Create starting
	I0524 12:34:02.933653    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:02.933690    4711 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:02.933710    4711 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:02.933791    4711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:02.933819    4711 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:02.933836    4711 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:02.934325    4711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:03.058203    4711 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:03.112781    4711 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:03.112787    4711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:03.112928    4711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2
	I0524 12:34:03.121100    4711 main.go:141] libmachine: STDOUT: 
	I0524 12:34:03.121116    4711 main.go:141] libmachine: STDERR: 
	I0524 12:34:03.121174    4711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2 +20000M
	I0524 12:34:03.128129    4711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:03.128142    4711 main.go:141] libmachine: STDERR: 
	I0524 12:34:03.128157    4711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2
	I0524 12:34:03.128162    4711 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:03.128206    4711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:b3:13:55:d2:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kindnet-220000/disk.qcow2
	I0524 12:34:03.129634    4711 main.go:141] libmachine: STDOUT: 
	I0524 12:34:03.129648    4711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:03.129660    4711 client.go:171] LocalClient.Create took 196.1465ms
	I0524 12:34:05.131802    4711 start.go:128] duration metric: createHost completed in 2.25518875s
	I0524 12:34:05.131863    4711 start.go:83] releasing machines lock for "kindnet-220000", held for 2.25559625s
	W0524 12:34:05.132449    4711 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:05.141074    4711 out.go:177] 
	W0524 12:34:05.144982    4711 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:34:05.145006    4711 out.go:239] * 
	* 
	W0524 12:34:05.147861    4711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:34:05.156968    4711 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.67s)
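
Note: the retry shape is identical in every failure: one createHost attempt, a "! StartHost failed, but will try again" warning, a fixed 5-second pause, one more attempt, then exit status 80 with reason GUEST_PROVISION. The sketch below models that control flow for illustration only; startHost is a hypothetical stand-in, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for minikube's host creation; here it always fails the
// way every attempt in the logs above does (hypothetical, for illustration).
func startHost(profile string) error {
	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	profile := "kindnet-220000"
	if err := startHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // the logs show a fixed 5s wait before the single retry
		if err := startHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: %v\n", err)
			os.Exit(80) // matches the "exit status 80" that net_test.go asserts on
		}
	}
}

Because the failure is environmental, the second attempt can never succeed, so each of these tests burns roughly ten seconds (two ~2.3s create attempts plus the 5s wait) before exiting.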

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0524 12:34:13.638847    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.751532208s)

                                                
                                                
-- stdout --
	* [flannel-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-220000 in cluster flannel-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:34:07.420519    4826 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:07.420657    4826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:07.420660    4826 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:07.420662    4826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:07.420725    4826 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:07.421770    4826 out.go:303] Setting JSON to false
	I0524 12:34:07.436887    4826 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3818,"bootTime":1684953029,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:07.436972    4826 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:07.442210    4826 out.go:177] * [flannel-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:07.446212    4826 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:07.446228    4826 notify.go:220] Checking for updates...
	I0524 12:34:07.454142    4826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:07.457273    4826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:07.458549    4826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:07.461176    4826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:07.464219    4826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:07.467528    4826 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:07.467548    4826 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:07.472181    4826 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:07.479217    4826 start.go:295] selected driver: qemu2
	I0524 12:34:07.479223    4826 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:07.479230    4826 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:07.481058    4826 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:07.485141    4826 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:34:07.488283    4826 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:07.488303    4826 cni.go:84] Creating CNI manager for "flannel"
	I0524 12:34:07.488308    4826 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0524 12:34:07.488314    4826 start_flags.go:319] config:
	{Name:flannel-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:07.488401    4826 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:07.496198    4826 out.go:177] * Starting control plane node flannel-220000 in cluster flannel-220000
	I0524 12:34:07.500146    4826 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:34:07.500165    4826 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:34:07.500175    4826 cache.go:57] Caching tarball of preloaded images
	I0524 12:34:07.500232    4826 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:34:07.500238    4826 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:34:07.500292    4826 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/flannel-220000/config.json ...
	I0524 12:34:07.500303    4826 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/flannel-220000/config.json: {Name:mkbd251704dee665b202bd7a576fae290edcbd6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:07.500501    4826 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:07.500515    4826 start.go:364] acquiring machines lock for flannel-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:07.500548    4826 start.go:368] acquired machines lock for "flannel-220000" in 28.458µs
	I0524 12:34:07.500561    4826 start.go:93] Provisioning new machine with config: &{Name:flannel-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:07.500590    4826 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:07.509245    4826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:07.525599    4826 start.go:159] libmachine.API.Create for "flannel-220000" (driver="qemu2")
	I0524 12:34:07.525616    4826 client.go:168] LocalClient.Create starting
	I0524 12:34:07.525669    4826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:07.525692    4826 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:07.525700    4826 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:07.525722    4826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:07.525741    4826 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:07.525746    4826 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:07.526311    4826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:07.636033    4826 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:07.816565    4826 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:07.816572    4826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:07.816731    4826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2
	I0524 12:34:07.825795    4826 main.go:141] libmachine: STDOUT: 
	I0524 12:34:07.825811    4826 main.go:141] libmachine: STDERR: 
	I0524 12:34:07.825903    4826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2 +20000M
	I0524 12:34:07.833111    4826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:07.833126    4826 main.go:141] libmachine: STDERR: 
	I0524 12:34:07.833174    4826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2
	I0524 12:34:07.833188    4826 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:07.833229    4826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e5:c7:10:9a:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2
	I0524 12:34:07.834779    4826 main.go:141] libmachine: STDOUT: 
	I0524 12:34:07.834795    4826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:07.834817    4826 client.go:171] LocalClient.Create took 309.19975ms
	I0524 12:34:09.836960    4826 start.go:128] duration metric: createHost completed in 2.336372417s
	I0524 12:34:09.837021    4826 start.go:83] releasing machines lock for "flannel-220000", held for 2.336486917s
	W0524 12:34:09.837095    4826 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:09.844650    4826 out.go:177] * Deleting "flannel-220000" in qemu2 ...
	W0524 12:34:09.863651    4826 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:09.863677    4826 start.go:702] Will try again in 5 seconds ...
	I0524 12:34:14.865984    4826 start.go:364] acquiring machines lock for flannel-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:14.866584    4826 start.go:368] acquired machines lock for "flannel-220000" in 496.916µs
	I0524 12:34:14.866686    4826 start.go:93] Provisioning new machine with config: &{Name:flannel-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:14.866968    4826 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:14.875792    4826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:14.923703    4826 start.go:159] libmachine.API.Create for "flannel-220000" (driver="qemu2")
	I0524 12:34:14.923739    4826 client.go:168] LocalClient.Create starting
	I0524 12:34:14.923863    4826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:14.923910    4826 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:14.923927    4826 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:14.924013    4826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:14.924041    4826 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:14.924055    4826 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:14.924561    4826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:15.047573    4826 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:15.084829    4826 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:15.084834    4826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:15.084957    4826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2
	I0524 12:34:15.093463    4826 main.go:141] libmachine: STDOUT: 
	I0524 12:34:15.093484    4826 main.go:141] libmachine: STDERR: 
	I0524 12:34:15.093537    4826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2 +20000M
	I0524 12:34:15.100665    4826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:15.100682    4826 main.go:141] libmachine: STDERR: 
	I0524 12:34:15.100697    4826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2
	I0524 12:34:15.100704    4826 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:15.100749    4826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:10:fd:13:cc:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/flannel-220000/disk.qcow2
	I0524 12:34:15.102292    4826 main.go:141] libmachine: STDOUT: 
	I0524 12:34:15.102305    4826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:15.102317    4826 client.go:171] LocalClient.Create took 178.575833ms
	I0524 12:34:17.104522    4826 start.go:128] duration metric: createHost completed in 2.237490959s
	I0524 12:34:17.104567    4826 start.go:83] releasing machines lock for "flannel-220000", held for 2.23798475s
	W0524 12:34:17.105116    4826 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:17.116293    4826 out.go:177] 
	W0524 12:34:17.120357    4826 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:34:17.120383    4826 out.go:239] * 
	* 
	W0524 12:34:17.123346    4826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:34:17.132255    4826 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.75s)
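
Both QEMU launch attempts for flannel-220000 fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM is never started. A minimal diagnostic sketch for the CI host follows; it assumes socket_vmnet was installed under /opt/socket_vmnet as in the lima-vm/socket_vmnet README, and the gateway address is the README's example value, not one taken from this run:

	# Is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# (Re)start the daemon; it needs root to create the vmnet interface.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Smoke-test the client the same way libmachine invokes it.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true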

TestNetworkPlugins/group/enable-default-cni/Start (9.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.597863916s)

-- stdout --
	* [enable-default-cni-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-220000 in cluster enable-default-cni-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:34:19.450574    4943 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:19.450707    4943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:19.450710    4943 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:19.450712    4943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:19.450779    4943 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:19.451825    4943 out.go:303] Setting JSON to false
	I0524 12:34:19.466993    4943 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3830,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:19.467060    4943 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:19.471101    4943 out.go:177] * [enable-default-cni-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:19.476501    4943 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:19.476528    4943 notify.go:220] Checking for updates...
	I0524 12:34:19.483038    4943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:19.486052    4943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:19.490003    4943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:19.493057    4943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:19.495926    4943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:19.499357    4943 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:19.499375    4943 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:19.503988    4943 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:19.511019    4943 start.go:295] selected driver: qemu2
	I0524 12:34:19.511025    4943 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:19.511030    4943 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:19.512882    4943 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:19.516035    4943 out.go:177] * Automatically selected the socket_vmnet network
	E0524 12:34:19.517467    4943 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0524 12:34:19.517478    4943 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:19.517495    4943 cni.go:84] Creating CNI manager for "bridge"
	I0524 12:34:19.517500    4943 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:34:19.517505    4943 start_flags.go:319] config:
	{Name:enable-default-cni-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:19.517588    4943 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:19.521989    4943 out.go:177] * Starting control plane node enable-default-cni-220000 in cluster enable-default-cni-220000
	I0524 12:34:19.530060    4943 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:34:19.530088    4943 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:34:19.530112    4943 cache.go:57] Caching tarball of preloaded images
	I0524 12:34:19.530188    4943 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:34:19.530195    4943 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:34:19.530266    4943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/enable-default-cni-220000/config.json ...
	I0524 12:34:19.530279    4943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/enable-default-cni-220000/config.json: {Name:mkf4329c2fc1089869a51d2649fd1ef92ccc057c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:19.530482    4943 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:19.530497    4943 start.go:364] acquiring machines lock for enable-default-cni-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:19.530527    4943 start.go:368] acquired machines lock for "enable-default-cni-220000" in 24.958µs
	I0524 12:34:19.530541    4943 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:19.530573    4943 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:19.539000    4943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:19.555401    4943 start.go:159] libmachine.API.Create for "enable-default-cni-220000" (driver="qemu2")
	I0524 12:34:19.555419    4943 client.go:168] LocalClient.Create starting
	I0524 12:34:19.555493    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:19.555514    4943 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:19.555528    4943 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:19.555568    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:19.555583    4943 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:19.555595    4943 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:19.555950    4943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:19.668307    4943 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:19.699291    4943 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:19.699299    4943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:19.699448    4943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2
	I0524 12:34:19.707998    4943 main.go:141] libmachine: STDOUT: 
	I0524 12:34:19.708010    4943 main.go:141] libmachine: STDERR: 
	I0524 12:34:19.708079    4943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2 +20000M
	I0524 12:34:19.715387    4943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:19.715404    4943 main.go:141] libmachine: STDERR: 
	I0524 12:34:19.715420    4943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2
	I0524 12:34:19.715426    4943 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:19.715457    4943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:4a:f1:3a:8c:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2
	I0524 12:34:19.717016    4943 main.go:141] libmachine: STDOUT: 
	I0524 12:34:19.717028    4943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:19.717047    4943 client.go:171] LocalClient.Create took 161.624625ms
	I0524 12:34:21.719189    4943 start.go:128] duration metric: createHost completed in 2.18861975s
	I0524 12:34:21.719263    4943 start.go:83] releasing machines lock for "enable-default-cni-220000", held for 2.188748875s
	W0524 12:34:21.719345    4943 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:21.727799    4943 out.go:177] * Deleting "enable-default-cni-220000" in qemu2 ...
	W0524 12:34:21.750889    4943 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:21.750917    4943 start.go:702] Will try again in 5 seconds ...
	I0524 12:34:26.753129    4943 start.go:364] acquiring machines lock for enable-default-cni-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:26.753575    4943 start.go:368] acquired machines lock for "enable-default-cni-220000" in 348.5µs
	I0524 12:34:26.753704    4943 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:26.754007    4943 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:26.759965    4943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:26.805030    4943 start.go:159] libmachine.API.Create for "enable-default-cni-220000" (driver="qemu2")
	I0524 12:34:26.805077    4943 client.go:168] LocalClient.Create starting
	I0524 12:34:26.805196    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:26.805244    4943 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:26.805263    4943 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:26.805332    4943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:26.805359    4943 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:26.805373    4943 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:26.805883    4943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:26.929566    4943 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:26.960510    4943 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:26.960515    4943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:26.960673    4943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2
	I0524 12:34:26.969151    4943 main.go:141] libmachine: STDOUT: 
	I0524 12:34:26.969170    4943 main.go:141] libmachine: STDERR: 
	I0524 12:34:26.969228    4943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2 +20000M
	I0524 12:34:26.976690    4943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:26.976703    4943 main.go:141] libmachine: STDERR: 
	I0524 12:34:26.976717    4943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2
	I0524 12:34:26.976732    4943 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:26.976770    4943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:0e:72:52:03:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/enable-default-cni-220000/disk.qcow2
	I0524 12:34:26.978335    4943 main.go:141] libmachine: STDOUT: 
	I0524 12:34:26.978353    4943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:26.978365    4943 client.go:171] LocalClient.Create took 173.282ms
	I0524 12:34:28.980567    4943 start.go:128] duration metric: createHost completed in 2.226557834s
	I0524 12:34:28.980630    4943 start.go:83] releasing machines lock for "enable-default-cni-220000", held for 2.227050208s
	W0524 12:34:28.981271    4943 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:28.991818    4943 out.go:177] 
	W0524 12:34:28.995953    4943 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:34:28.995983    4943 out.go:239] * 
	* 
	W0524 12:34:28.998701    4943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:34:29.007772    4943 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.60s)
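
Note the E-level line in the stderr above: minikube treats --enable-default-cni as deprecated and rewrites it to --cni=bridge, so this profile exercises the same bridge CNI path as the bridge test that follows. An equivalent invocation without the deprecated flag would be (a sketch; profile name and flags kept from this run):

	out/minikube-darwin-arm64 start -p enable-default-cni-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2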

TestNetworkPlugins/group/bridge/Start (9.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.703718417s)

-- stdout --
	* [bridge-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-220000 in cluster bridge-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:34:31.181544    5055 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:31.181685    5055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:31.181688    5055 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:31.181699    5055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:31.181768    5055 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:31.182837    5055 out.go:303] Setting JSON to false
	I0524 12:34:31.197942    5055 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3842,"bootTime":1684953029,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:31.198021    5055 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:31.203189    5055 out.go:177] * [bridge-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:31.211136    5055 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:31.211170    5055 notify.go:220] Checking for updates...
	I0524 12:34:31.218056    5055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:31.221174    5055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:31.224134    5055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:31.227174    5055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:31.230156    5055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:31.231808    5055 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:31.231828    5055 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:31.236043    5055 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:31.242938    5055 start.go:295] selected driver: qemu2
	I0524 12:34:31.242944    5055 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:31.242951    5055 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:31.244859    5055 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:31.248079    5055 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:34:31.251192    5055 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:31.251214    5055 cni.go:84] Creating CNI manager for "bridge"
	I0524 12:34:31.251218    5055 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:34:31.251225    5055 start_flags.go:319] config:
	{Name:bridge-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:31.251314    5055 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:31.260098    5055 out.go:177] * Starting control plane node bridge-220000 in cluster bridge-220000
	I0524 12:34:31.264064    5055 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:34:31.264090    5055 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:34:31.264103    5055 cache.go:57] Caching tarball of preloaded images
	I0524 12:34:31.264167    5055 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:34:31.264173    5055 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:34:31.264230    5055 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/bridge-220000/config.json ...
	I0524 12:34:31.264241    5055 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/bridge-220000/config.json: {Name:mka790fe9e6dbc890b7bbb825588efe214eca7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:31.264435    5055 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:31.264451    5055 start.go:364] acquiring machines lock for bridge-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:31.264481    5055 start.go:368] acquired machines lock for "bridge-220000" in 25.042µs
	I0524 12:34:31.264495    5055 start.go:93] Provisioning new machine with config: &{Name:bridge-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:31.264535    5055 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:31.273115    5055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:31.290257    5055 start.go:159] libmachine.API.Create for "bridge-220000" (driver="qemu2")
	I0524 12:34:31.290280    5055 client.go:168] LocalClient.Create starting
	I0524 12:34:31.290340    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:31.290362    5055 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:31.290378    5055 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:31.290420    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:31.290435    5055 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:31.290443    5055 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:31.290799    5055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:31.401750    5055 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:31.478953    5055 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:31.478965    5055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:31.479117    5055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2
	I0524 12:34:31.487596    5055 main.go:141] libmachine: STDOUT: 
	I0524 12:34:31.487610    5055 main.go:141] libmachine: STDERR: 
	I0524 12:34:31.487661    5055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2 +20000M
	I0524 12:34:31.494764    5055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:31.494776    5055 main.go:141] libmachine: STDERR: 
	I0524 12:34:31.494788    5055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2
	I0524 12:34:31.494794    5055 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:31.494830    5055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:10:54:f8:5b:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2
	I0524 12:34:31.496336    5055 main.go:141] libmachine: STDOUT: 
	I0524 12:34:31.496349    5055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:31.496373    5055 client.go:171] LocalClient.Create took 206.088834ms
	I0524 12:34:33.498543    5055 start.go:128] duration metric: createHost completed in 2.2339975s
	I0524 12:34:33.498633    5055 start.go:83] releasing machines lock for "bridge-220000", held for 2.234164042s
	W0524 12:34:33.498693    5055 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:33.510249    5055 out.go:177] * Deleting "bridge-220000" in qemu2 ...
	W0524 12:34:33.528714    5055 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:33.528745    5055 start.go:702] Will try again in 5 seconds ...
	I0524 12:34:38.530988    5055 start.go:364] acquiring machines lock for bridge-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:38.531392    5055 start.go:368] acquired machines lock for "bridge-220000" in 291.708µs
	I0524 12:34:38.531513    5055 start.go:93] Provisioning new machine with config: &{Name:bridge-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:38.531940    5055 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:38.540813    5055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:38.588567    5055 start.go:159] libmachine.API.Create for "bridge-220000" (driver="qemu2")
	I0524 12:34:38.588614    5055 client.go:168] LocalClient.Create starting
	I0524 12:34:38.588719    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:38.588765    5055 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:38.588787    5055 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:38.588860    5055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:38.588888    5055 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:38.588899    5055 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:38.589424    5055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:38.715403    5055 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:38.796453    5055 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:38.796459    5055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:38.796594    5055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2
	I0524 12:34:38.805223    5055 main.go:141] libmachine: STDOUT: 
	I0524 12:34:38.805238    5055 main.go:141] libmachine: STDERR: 
	I0524 12:34:38.805305    5055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2 +20000M
	I0524 12:34:38.812396    5055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:38.812408    5055 main.go:141] libmachine: STDERR: 
	I0524 12:34:38.812420    5055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2
	I0524 12:34:38.812426    5055 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:38.812463    5055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:59:f0:5f:32:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/bridge-220000/disk.qcow2
	I0524 12:34:38.813940    5055 main.go:141] libmachine: STDOUT: 
	I0524 12:34:38.813953    5055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:38.813966    5055 client.go:171] LocalClient.Create took 225.349458ms
	I0524 12:34:40.816129    5055 start.go:128] duration metric: createHost completed in 2.284188833s
	I0524 12:34:40.816182    5055 start.go:83] releasing machines lock for "bridge-220000", held for 2.284789833s
	W0524 12:34:40.816946    5055 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:40.827623    5055 out.go:177] 
	W0524 12:34:40.831682    5055 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:34:40.831704    5055 out.go:239] * 
	* 
	W0524 12:34:40.834381    5055 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:34:40.844506    5055 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.71s)
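Both qemu2 create attempts above die at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the start exits with status 80. A minimal Go sketch of the same reachability probe follows; the socket path is taken from the config dump above, and the probe itself is illustrative, not part of the test suite.

	// probe_socket_vmnet.go — hypothetical diagnostic, not minikube code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath as logged in the cluster config above; adjust if
		// your socket_vmnet daemon listens elsewhere.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure in the log:
			// the daemon is not running, or not listening on this socket.
			fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet accepting connections at %s\n", sock)
	}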

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-220000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.810079667s)

                                                
                                                
-- stdout --
	* [kubenet-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-220000 in cluster kubenet-220000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:34:43.007812    5166 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:43.007942    5166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:43.007945    5166 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:43.007948    5166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:43.008022    5166 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:43.009043    5166 out.go:303] Setting JSON to false
	I0524 12:34:43.024174    5166 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3854,"bootTime":1684953029,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:43.024248    5166 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:43.032301    5166 out.go:177] * [kubenet-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:43.036304    5166 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:43.036353    5166 notify.go:220] Checking for updates...
	I0524 12:34:43.044044    5166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:43.047258    5166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:43.050255    5166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:43.053301    5166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:43.056252    5166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:43.059598    5166 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:43.059618    5166 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:43.064291    5166 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:43.071219    5166 start.go:295] selected driver: qemu2
	I0524 12:34:43.071225    5166 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:43.071232    5166 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:43.073186    5166 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:43.076268    5166 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:34:43.080297    5166 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:43.080311    5166 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0524 12:34:43.080314    5166 start_flags.go:319] config:
	{Name:kubenet-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:43.080397    5166 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:43.089154    5166 out.go:177] * Starting control plane node kubenet-220000 in cluster kubenet-220000
	I0524 12:34:43.093207    5166 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:34:43.093234    5166 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:34:43.093253    5166 cache.go:57] Caching tarball of preloaded images
	I0524 12:34:43.093314    5166 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:34:43.093323    5166 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:34:43.093380    5166 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubenet-220000/config.json ...
	I0524 12:34:43.093392    5166 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubenet-220000/config.json: {Name:mk69566e35f8bd96bef44540ef1731c0285469d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:43.093590    5166 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:43.093610    5166 start.go:364] acquiring machines lock for kubenet-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:43.093641    5166 start.go:368] acquired machines lock for "kubenet-220000" in 25.708µs
	I0524 12:34:43.093655    5166 start.go:93] Provisioning new machine with config: &{Name:kubenet-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:43.093682    5166 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:43.102218    5166 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:43.119124    5166 start.go:159] libmachine.API.Create for "kubenet-220000" (driver="qemu2")
	I0524 12:34:43.119138    5166 client.go:168] LocalClient.Create starting
	I0524 12:34:43.119193    5166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:43.119212    5166 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:43.119222    5166 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:43.119246    5166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:43.119261    5166 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:43.119267    5166 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:43.119588    5166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:43.254284    5166 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:43.408759    5166 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:43.408769    5166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:43.408935    5166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:43.417824    5166 main.go:141] libmachine: STDOUT: 
	I0524 12:34:43.417841    5166 main.go:141] libmachine: STDERR: 
	I0524 12:34:43.417897    5166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2 +20000M
	I0524 12:34:43.425157    5166 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:43.425170    5166 main.go:141] libmachine: STDERR: 
	I0524 12:34:43.425189    5166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:43.425194    5166 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:43.425232    5166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:67:26:09:57:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:43.426755    5166 main.go:141] libmachine: STDOUT: 
	I0524 12:34:43.426768    5166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:43.426789    5166 client.go:171] LocalClient.Create took 307.649417ms
	I0524 12:34:45.429094    5166 start.go:128] duration metric: createHost completed in 2.335351208s
	I0524 12:34:45.429200    5166 start.go:83] releasing machines lock for "kubenet-220000", held for 2.335573792s
	W0524 12:34:45.429255    5166 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:45.437024    5166 out.go:177] * Deleting "kubenet-220000" in qemu2 ...
	W0524 12:34:45.456350    5166 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:45.456373    5166 start.go:702] Will try again in 5 seconds ...
	I0524 12:34:50.458372    5166 start.go:364] acquiring machines lock for kubenet-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:50.458457    5166 start.go:368] acquired machines lock for "kubenet-220000" in 61µs
	I0524 12:34:50.458468    5166 start.go:93] Provisioning new machine with config: &{Name:kubenet-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:50.458518    5166 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:50.465910    5166 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:50.482422    5166 start.go:159] libmachine.API.Create for "kubenet-220000" (driver="qemu2")
	I0524 12:34:50.482456    5166 client.go:168] LocalClient.Create starting
	I0524 12:34:50.482512    5166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:50.482535    5166 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:50.482545    5166 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:50.482590    5166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:50.482604    5166 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:50.482611    5166 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:50.482868    5166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:50.617404    5166 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:50.728805    5166 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:50.728813    5166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:50.728972    5166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:50.737883    5166 main.go:141] libmachine: STDOUT: 
	I0524 12:34:50.737902    5166 main.go:141] libmachine: STDERR: 
	I0524 12:34:50.737972    5166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2 +20000M
	I0524 12:34:50.746050    5166 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:50.746067    5166 main.go:141] libmachine: STDERR: 
	I0524 12:34:50.746087    5166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:50.746094    5166 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:50.746141    5166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:12:35:77:d0:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:50.748199    5166 main.go:141] libmachine: STDOUT: 
	I0524 12:34:50.748216    5166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:50.748228    5166 client.go:171] LocalClient.Create took 265.771708ms
	I0524 12:34:52.750390    5166 start.go:128] duration metric: createHost completed in 2.291865375s
	I0524 12:34:52.750481    5166 start.go:83] releasing machines lock for "kubenet-220000", held for 2.292036791s
	W0524 12:34:52.751050    5166 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:52.763712    5166 out.go:177] 
	W0524 12:34:52.767926    5166 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:34:52.767956    5166 out.go:239] * 
	W0524 12:34:52.770644    5166 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:34:52.780643    5166 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.81s)
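The kubenet run repeats the recovery pattern visible in the log: createHost fails against the unreachable socket, the half-created profile is deleted ("Deleting \"kubenet-220000\" in qemu2 ..."), and one retry is attempted after 5 seconds before minikube exits with GUEST_PROVISION. Below is a compact Go sketch of that retry shape under stated assumptions; startHost is a hypothetical stand-in for the libmachine create path, not minikube's actual function.

	// retry_sketch.go — illustrative only; names are hypothetical.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the create call that fails in this run.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}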

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (2.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe start -p stopped-upgrade-633000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe start -p stopped-upgrade-633000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe: permission denied (6.00875ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe start -p stopped-upgrade-633000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe start -p stopped-upgrade-633000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe: permission denied (5.563583ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe start -p stopped-upgrade-633000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe start -p stopped-upgrade-633000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe: permission denied (5.410583ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2048642223.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.34s)
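This failure is different in kind: the legacy v1.6.2 binary is fetched into the darwin temp dir, but all three fork/exec attempts report "permission denied", the usual symptom of a file written to disk without the executable bit set. A hedged Go sketch of the conventional fix, chmod before exec, follows; the path and arguments are illustrative, not the test's actual code.

	// exec_legacy_sketch.go — illustrative; the path is hypothetical.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		bin := filepath.Join(os.TempDir(), "minikube-v1.6.2.exe") // stand-in path
		// Without the executable bit, exec fails exactly as in the log:
		//   fork/exec ...: permission denied
		if err := os.Chmod(bin, 0o755); err != nil {
			fmt.Println("chmod:", err)
			return
		}
		out, err := exec.Command(bin, "start", "-p", "stopped-upgrade-633000", "--vm-driver=qemu2").CombinedOutput()
		fmt.Printf("%s\nexit err: %v\n", out, err)
	}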

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-633000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-633000: exit status 85 (115.993792ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000 sudo cat                | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000 sudo cat                | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000 sudo cat                | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-220000                         | enable-default-cni-220000 | jenkins | v1.30.1 | 24 May 23 12:34 PDT | 24 May 23 12:34 PDT |
	| start   | -p bridge-220000 --memory=3072                       | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo crictl                         | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo crictl                         | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo find                           | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo ip a s                         | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	| ssh     | -p bridge-220000 sudo ip r s                         | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo iptables                       | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo docker                         | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo cat                            | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo                                | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo find                           | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-220000 sudo crio                           | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p bridge-220000                                     | bridge-220000             | jenkins | v1.30.1 | 24 May 23 12:34 PDT | 24 May 23 12:34 PDT |
	| start   | -p kubenet-220000                                    | kubenet-220000            | jenkins | v1.30.1 | 24 May 23 12:34 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 12:34:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 12:34:43.007812    5166 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:43.007942    5166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:43.007945    5166 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:43.007948    5166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:43.008022    5166 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:43.009043    5166 out.go:303] Setting JSON to false
	I0524 12:34:43.024174    5166 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3854,"bootTime":1684953029,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:43.024248    5166 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:43.032301    5166 out.go:177] * [kubenet-220000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:43.036304    5166 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:43.036353    5166 notify.go:220] Checking for updates...
	I0524 12:34:43.044044    5166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:43.047258    5166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:43.050255    5166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:43.053301    5166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:43.056252    5166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:43.059598    5166 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:43.059618    5166 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:43.064291    5166 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:43.071219    5166 start.go:295] selected driver: qemu2
	I0524 12:34:43.071225    5166 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:43.071232    5166 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:43.073186    5166 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:43.076268    5166 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:34:43.080297    5166 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:43.080311    5166 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0524 12:34:43.080314    5166 start_flags.go:319] config:
	{Name:kubenet-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:43.080397    5166 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:43.089154    5166 out.go:177] * Starting control plane node kubenet-220000 in cluster kubenet-220000
	I0524 12:34:43.093207    5166 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:34:43.093234    5166 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:34:43.093253    5166 cache.go:57] Caching tarball of preloaded images
	I0524 12:34:43.093314    5166 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:34:43.093323    5166 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:34:43.093380    5166 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubenet-220000/config.json ...
	I0524 12:34:43.093392    5166 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/kubenet-220000/config.json: {Name:mk69566e35f8bd96bef44540ef1731c0285469d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:43.093590    5166 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:43.093610    5166 start.go:364] acquiring machines lock for kubenet-220000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:43.093641    5166 start.go:368] acquired machines lock for "kubenet-220000" in 25.708µs
	I0524 12:34:43.093655    5166 start.go:93] Provisioning new machine with config: &{Name:kubenet-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:43.093682    5166 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:43.102218    5166 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0524 12:34:43.119124    5166 start.go:159] libmachine.API.Create for "kubenet-220000" (driver="qemu2")
	I0524 12:34:43.119138    5166 client.go:168] LocalClient.Create starting
	I0524 12:34:43.119193    5166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:43.119212    5166 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:43.119222    5166 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:43.119246    5166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:43.119261    5166 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:43.119267    5166 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:43.119588    5166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:43.254284    5166 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:43.408759    5166 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:43.408769    5166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:43.408935    5166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:43.417824    5166 main.go:141] libmachine: STDOUT: 
	I0524 12:34:43.417841    5166 main.go:141] libmachine: STDERR: 
	I0524 12:34:43.417897    5166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2 +20000M
	I0524 12:34:43.425157    5166 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:43.425170    5166 main.go:141] libmachine: STDERR: 
	I0524 12:34:43.425189    5166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:43.425194    5166 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:43.425232    5166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:67:26:09:57:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/kubenet-220000/disk.qcow2
	I0524 12:34:43.426755    5166 main.go:141] libmachine: STDOUT: 
	I0524 12:34:43.426768    5166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:43.426789    5166 client.go:171] LocalClient.Create took 307.649417ms
	I0524 12:34:45.429094    5166 start.go:128] duration metric: createHost completed in 2.335351208s
	I0524 12:34:45.429200    5166 start.go:83] releasing machines lock for "kubenet-220000", held for 2.335573792s
	W0524 12:34:45.429255    5166 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:45.437024    5166 out.go:177] * Deleting "kubenet-220000" in qemu2 ...
	W0524 12:34:45.456350    5166 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:45.456373    5166 start.go:702] Will try again in 5 seconds ...
	
	* 
	* Profile "stopped-upgrade-633000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-633000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
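
Every failure in this run bottoms out at the same step: the disk image is created successfully, then libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. Below is a minimal standalone probe for that socket; this is a sketch in plain Go, not minikube code, and the socket path is simply the one from the logs.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client needs. A refused
	// connection here reproduces the error seen throughout this run.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, the socket_vmnet daemon is most likely not running on the agent, which would explain why every qemu2 test below fails before Kubernetes provisioning even begins.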

TestStartStop/group/old-k8s-version/serial/FirstStart (11.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-787000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-787000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.642164833s)

-- stdout --
	* [old-k8s-version-787000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-787000 in cluster old-k8s-version-787000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-787000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:34:50.761315    5196 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:50.761424    5196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:50.761427    5196 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:50.761429    5196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:50.761499    5196 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:50.762504    5196 out.go:303] Setting JSON to false
	I0524 12:34:50.777812    5196 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3861,"bootTime":1684953029,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:50.777875    5196 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:50.784939    5196 out.go:177] * [old-k8s-version-787000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:50.788055    5196 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:50.788082    5196 notify.go:220] Checking for updates...
	I0524 12:34:50.795945    5196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:50.799054    5196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:50.801964    5196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:50.804997    5196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:50.808048    5196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:50.811329    5196 config.go:182] Loaded profile config "kubenet-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:50.811388    5196 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:50.811408    5196 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:50.815994    5196 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:50.822903    5196 start.go:295] selected driver: qemu2
	I0524 12:34:50.822908    5196 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:50.822915    5196 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:50.824822    5196 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:50.827975    5196 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:34:50.831100    5196 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:50.831129    5196 cni.go:84] Creating CNI manager for ""
	I0524 12:34:50.831137    5196 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 12:34:50.831142    5196 start_flags.go:319] config:
	{Name:old-k8s-version-787000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-787000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:50.831231    5196 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:50.838949    5196 out.go:177] * Starting control plane node old-k8s-version-787000 in cluster old-k8s-version-787000
	I0524 12:34:50.842849    5196 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 12:34:50.842872    5196 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 12:34:50.842884    5196 cache.go:57] Caching tarball of preloaded images
	I0524 12:34:50.842946    5196 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:34:50.842951    5196 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0524 12:34:50.843023    5196 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/old-k8s-version-787000/config.json ...
	I0524 12:34:50.843037    5196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/old-k8s-version-787000/config.json: {Name:mk97ef4335e36d82795c776aecfe79dc1fb099b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:50.843238    5196 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:50.843255    5196 start.go:364] acquiring machines lock for old-k8s-version-787000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:52.750667    5196 start.go:368] acquired machines lock for "old-k8s-version-787000" in 1.907368542s
	I0524 12:34:52.750909    5196 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-787000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-787000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:52.751137    5196 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:52.760687    5196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:34:52.806998    5196 start.go:159] libmachine.API.Create for "old-k8s-version-787000" (driver="qemu2")
	I0524 12:34:52.807046    5196 client.go:168] LocalClient.Create starting
	I0524 12:34:52.807153    5196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:52.807191    5196 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:52.807209    5196 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:52.807280    5196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:52.807307    5196 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:52.807321    5196 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:52.808018    5196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:52.931281    5196 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:52.996984    5196 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:52.996997    5196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:52.997182    5196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:34:53.006532    5196 main.go:141] libmachine: STDOUT: 
	I0524 12:34:53.006562    5196 main.go:141] libmachine: STDERR: 
	I0524 12:34:53.006626    5196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2 +20000M
	I0524 12:34:53.014536    5196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:53.014555    5196 main.go:141] libmachine: STDERR: 
	I0524 12:34:53.014594    5196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:34:53.014610    5196 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:53.014648    5196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:64:e3:45:ef:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:34:53.016546    5196 main.go:141] libmachine: STDOUT: 
	I0524 12:34:53.016559    5196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:53.016576    5196 client.go:171] LocalClient.Create took 209.5245ms
	I0524 12:34:55.018613    5196 start.go:128] duration metric: createHost completed in 2.267487709s
	I0524 12:34:55.018630    5196 start.go:83] releasing machines lock for "old-k8s-version-787000", held for 2.267948375s
	W0524 12:34:55.018647    5196 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:55.035606    5196 out.go:177] * Deleting "old-k8s-version-787000" in qemu2 ...
	W0524 12:34:55.044840    5196 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:55.044851    5196 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:00.045466    5196 start.go:364] acquiring machines lock for old-k8s-version-787000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:00.045999    5196 start.go:368] acquired machines lock for "old-k8s-version-787000" in 437.125µs
	I0524 12:35:00.046099    5196 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-787000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-787000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:00.046502    5196 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:00.056165    5196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:00.104678    5196 start.go:159] libmachine.API.Create for "old-k8s-version-787000" (driver="qemu2")
	I0524 12:35:00.104742    5196 client.go:168] LocalClient.Create starting
	I0524 12:35:00.104861    5196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:00.104910    5196 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:00.104927    5196 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:00.105007    5196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:00.105041    5196 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:00.105054    5196 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:00.105615    5196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:00.261646    5196 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:00.318954    5196 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:00.318959    5196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:00.319095    5196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:35:00.327751    5196 main.go:141] libmachine: STDOUT: 
	I0524 12:35:00.327766    5196 main.go:141] libmachine: STDERR: 
	I0524 12:35:00.327812    5196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2 +20000M
	I0524 12:35:00.335372    5196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:00.335390    5196 main.go:141] libmachine: STDERR: 
	I0524 12:35:00.335407    5196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:35:00.335413    5196 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:00.335449    5196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f9:37:e1:d1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:35:00.337089    5196 main.go:141] libmachine: STDOUT: 
	I0524 12:35:00.337104    5196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:00.337116    5196 client.go:171] LocalClient.Create took 232.371083ms
	I0524 12:35:02.337542    5196 start.go:128] duration metric: createHost completed in 2.291027667s
	I0524 12:35:02.337667    5196 start.go:83] releasing machines lock for "old-k8s-version-787000", held for 2.291626334s
	W0524 12:35:02.338177    5196 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-787000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:02.347778    5196 out.go:177] 
	W0524 12:35:02.352229    5196 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:02.352278    5196 out.go:239] * 
	W0524 12:35:02.355145    5196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:02.368735    5196 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-787000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (64.29775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.71s)
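
Note that the disk-preparation step succeeds on every attempt before the launch fails: libmachine shells out to qemu-img twice, first converting the raw image to qcow2 and then growing it by +20000M. A standalone sketch of that two-command sequence follows (Go with placeholder paths; not the actual libmachine code).

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, mirroring the
// STDOUT/STDERR pairs that appear in the log above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s: %s\n", name, out)
	return err
}

func main() {
	raw := "/tmp/disk.qcow2.raw" // placeholder for the machine's raw image
	qcow2 := "/tmp/disk.qcow2"   // placeholder for the converted image

	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	// Grow the image by 20000 MB, matching the "+20000M" resize in the log.
	if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}

Since both qemu-img calls return cleanly in every attempt logged here, the failures are isolated to the socket_vmnet networking step, not image creation.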

TestStartStop/group/no-preload/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-601000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-601000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.853861792s)

-- stdout --
	* [no-preload-601000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-601000 in cluster no-preload-601000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:34:54.917417    5306 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:34:54.917574    5306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:54.917577    5306 out.go:309] Setting ErrFile to fd 2...
	I0524 12:34:54.917580    5306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:34:54.917659    5306 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:34:54.918720    5306 out.go:303] Setting JSON to false
	I0524 12:34:54.933834    5306 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3865,"bootTime":1684953029,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:34:54.933906    5306 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:34:54.938718    5306 out.go:177] * [no-preload-601000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:34:54.946761    5306 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:34:54.946837    5306 notify.go:220] Checking for updates...
	I0524 12:34:54.953688    5306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:34:54.956722    5306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:34:54.959661    5306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:34:54.962674    5306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:34:54.965727    5306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:34:54.968836    5306 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:34:54.968923    5306 config.go:182] Loaded profile config "old-k8s-version-787000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0524 12:34:54.968941    5306 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:34:54.973672    5306 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:34:54.980667    5306 start.go:295] selected driver: qemu2
	I0524 12:34:54.980674    5306 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:34:54.980681    5306 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:34:54.982593    5306 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:34:54.985647    5306 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:34:54.988773    5306 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:34:54.988791    5306 cni.go:84] Creating CNI manager for ""
	I0524 12:34:54.988806    5306 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:34:54.988810    5306 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:34:54.988815    5306 start_flags.go:319] config:
	{Name:no-preload-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-601000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:34:54.988892    5306 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:54.997607    5306 out.go:177] * Starting control plane node no-preload-601000 in cluster no-preload-601000
	I0524 12:34:55.000546    5306 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:34:55.000617    5306 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/no-preload-601000/config.json ...
	I0524 12:34:55.000633    5306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/no-preload-601000/config.json: {Name:mkee044790fdcf96a0c8118d87672cf94c0ff324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:34:55.000666    5306 cache.go:107] acquiring lock: {Name:mk33aa16ac70bea4b5e4208a223ddc998c6f47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000673    5306 cache.go:107] acquiring lock: {Name:mk4e07ad114f29d0925a9cb771b427e5c418dbb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000666    5306 cache.go:107] acquiring lock: {Name:mk83f238773193a6319b253ac5914ab99560dbf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000715    5306 cache.go:107] acquiring lock: {Name:mk578670fae400ebf259a24fe1f33098df7022d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000737    5306 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0524 12:34:55.000743    5306 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.458µs
	I0524 12:34:55.000750    5306 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0524 12:34:55.000758    5306 cache.go:107] acquiring lock: {Name:mkea3c27bdedb4db23d18f5fdb9e5f8d27a968b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000833    5306 cache.go:107] acquiring lock: {Name:mk6815076538afdb2c63860b3ae3741a8ecd2119 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000907    5306 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.2
	I0524 12:34:55.000915    5306 cache.go:107] acquiring lock: {Name:mkeb7ca46450943839c26fc40c9583287f12fc22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000924    5306 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0524 12:34:55.000937    5306 cache.go:107] acquiring lock: {Name:mke38817bd25b34e09076823cad5b1482ebe35c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:34:55.000952    5306 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:34:55.001034    5306 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.2
	I0524 12:34:55.001076    5306 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0524 12:34:55.000953    5306 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0524 12:34:55.001026    5306 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 12:34:55.001126    5306 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.2
	I0524 12:34:55.001122    5306 start.go:364] acquiring machines lock for no-preload-601000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:34:55.018656    5306 start.go:368] acquired machines lock for "no-preload-601000" in 17.507917ms
	I0524 12:34:55.018681    5306 start.go:93] Provisioning new machine with config: &{Name:no-preload-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-601000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:34:55.018732    5306 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:34:55.026887    5306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:34:55.025070    5306 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0524 12:34:55.033308    5306 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.2
	I0524 12:34:55.033728    5306 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0524 12:34:55.035849    5306 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.2
	I0524 12:34:55.035983    5306 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 12:34:55.037971    5306 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0524 12:34:55.038030    5306 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.2
	I0524 12:34:55.047311    5306 start.go:159] libmachine.API.Create for "no-preload-601000" (driver="qemu2")
	I0524 12:34:55.047326    5306 client.go:168] LocalClient.Create starting
	I0524 12:34:55.047402    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:34:55.047420    5306 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:55.047428    5306 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:55.047472    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:34:55.047486    5306 main.go:141] libmachine: Decoding PEM data...
	I0524 12:34:55.047494    5306 main.go:141] libmachine: Parsing certificate...
	I0524 12:34:55.049901    5306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:34:55.176989    5306 main.go:141] libmachine: Creating SSH key...
	I0524 12:34:55.304763    5306 main.go:141] libmachine: Creating Disk image...
	I0524 12:34:55.304784    5306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:34:55.304955    5306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:34:55.314282    5306 main.go:141] libmachine: STDOUT: 
	I0524 12:34:55.314301    5306 main.go:141] libmachine: STDERR: 
	I0524 12:34:55.314367    5306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2 +20000M
	I0524 12:34:55.321993    5306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:34:55.322007    5306 main.go:141] libmachine: STDERR: 
	I0524 12:34:55.322036    5306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:34:55.322049    5306 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:34:55.322094    5306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:94:39:41:3c:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:34:55.323544    5306 main.go:141] libmachine: STDOUT: 
	I0524 12:34:55.323557    5306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:34:55.323575    5306 client.go:171] LocalClient.Create took 276.247ms
	I0524 12:34:56.210242    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0524 12:34:56.224110    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2
	I0524 12:34:56.303267    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0524 12:34:56.435374    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2
	I0524 12:34:56.466820    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0524 12:34:56.466829    5306 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.466129667s
	I0524 12:34:56.466838    5306 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0524 12:34:56.487572    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2
	I0524 12:34:56.665173    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0524 12:34:56.856297    5306 cache.go:162] opening:  /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2
	I0524 12:34:57.323927    5306 start.go:128] duration metric: createHost completed in 2.305188416s
	I0524 12:34:57.323974    5306 start.go:83] releasing machines lock for "no-preload-601000", held for 2.305327458s
	W0524 12:34:57.324027    5306 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:57.337245    5306 out.go:177] * Deleting "no-preload-601000" in qemu2 ...
	W0524 12:34:57.360120    5306 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:34:57.360162    5306 start.go:702] Will try again in 5 seconds ...
	I0524 12:34:57.822292    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0524 12:34:57.822347    5306 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.821498916s
	I0524 12:34:57.822376    5306 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0524 12:34:59.631206    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0524 12:34:59.631259    5306 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 4.63065025s
	I0524 12:34:59.631292    5306 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0524 12:34:59.858943    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0524 12:34:59.858999    5306 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 4.858220541s
	I0524 12:34:59.859041    5306 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0524 12:35:00.455525    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0524 12:35:00.455538    5306 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 5.454951708s
	I0524 12:35:00.455551    5306 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0524 12:35:00.668825    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0524 12:35:00.668864    5306 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 5.668098042s
	I0524 12:35:00.668888    5306 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0524 12:35:02.368995    5306 start.go:364] acquiring machines lock for no-preload-601000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:02.369655    5306 start.go:368] acquired machines lock for "no-preload-601000" in 530.083µs
	I0524 12:35:02.369794    5306 start.go:93] Provisioning new machine with config: &{Name:no-preload-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-601000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:02.370036    5306 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:02.380861    5306 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:02.421961    5306 start.go:159] libmachine.API.Create for "no-preload-601000" (driver="qemu2")
	I0524 12:35:02.422001    5306 client.go:168] LocalClient.Create starting
	I0524 12:35:02.422092    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:02.422132    5306 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:02.422148    5306 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:02.422230    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:02.422256    5306 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:02.422273    5306 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:02.422763    5306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:02.567539    5306 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:02.675423    5306 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:02.675433    5306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:02.675604    5306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:35:02.684768    5306 main.go:141] libmachine: STDOUT: 
	I0524 12:35:02.684789    5306 main.go:141] libmachine: STDERR: 
	I0524 12:35:02.684863    5306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2 +20000M
	I0524 12:35:02.693102    5306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:02.693123    5306 main.go:141] libmachine: STDERR: 
	I0524 12:35:02.693140    5306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:35:02.693150    5306 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:02.693205    5306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:59:79:07:68:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:35:02.695024    5306 main.go:141] libmachine: STDOUT: 
	I0524 12:35:02.695040    5306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:02.695051    5306 client.go:171] LocalClient.Create took 273.049ms
	I0524 12:35:04.605140    5306 cache.go:157] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0524 12:35:04.605242    5306 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 9.604573666s
	I0524 12:35:04.605267    5306 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0524 12:35:04.605312    5306 cache.go:87] Successfully saved all images to host disk.
	I0524 12:35:04.697207    5306 start.go:128] duration metric: createHost completed in 2.327108s
	I0524 12:35:04.697255    5306 start.go:83] releasing machines lock for "no-preload-601000", held for 2.32759725s
	W0524 12:35:04.697657    5306 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:04.707179    5306 out.go:177] 
	W0524 12:35:04.718575    5306 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:04.718600    5306 out.go:239] * 
	W0524 12:35:04.721073    5306 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:04.729086    5306 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-601000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (58.188875ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
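A note on the post-mortem probes that recur throughout these failures: per minikube's own "status --help" text, the exit status is a bit field (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so exit status 7 simply encodes a fully Stopped profile rather than a broken command, which is why the harness flags it as "may be ok". The probe can be replayed by hand with the same binary and profile name taken from the log above:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000
	echo "exit=$?"   # expect 7 while the host is Stopped
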
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.91s)

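Every failure in this group traces back to the same root cause visible in the stderr log above: the qemu2 driver launches the VM through socket_vmnet, and the client cannot reach the daemon's unix socket (Failed to connect to "/var/run/socket_vmnet": Connection refused). A minimal spot-check for the CI host, using the socket path and install prefix that appear in the log; the daemon launch flags below are an assumption based on socket_vmnet's typical invocation, not something this report records:

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, it must be restarted as root before re-running
	# the suite (the gateway address below is a placeholder):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
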
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-787000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-787000 create -f testdata/busybox.yaml: exit status 1 (32.19975ms)

** stderr ** 
	error: context "old-k8s-version-787000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-787000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (30.955167ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (30.190667ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

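This DeployApp failure is purely downstream of the failed start above: kubectl is pointed at the old-k8s-version-787000 context, but minikube never got far enough to write that context into the kubeconfig. A quick check that distinguishes this cascade from a genuine deploy bug (context name from the log; the commands are standard kubectl):

	# The profile's context is absent whenever start aborts this early.
	kubectl config get-contexts
	kubectl config get-contexts old-k8s-version-787000 || echo "context missing: start never completed"
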
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-787000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-787000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-787000 describe deploy/metrics-server -n kube-system: exit status 1 (27.332125ms)

** stderr ** 
	error: context "old-k8s-version-787000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-787000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (28.602875ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

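Note that the addons enable step itself records no error above; only the follow-up kubectl describe fails, again for want of a context. Once a start succeeds, the --images/--registries overrides can be checked directly against the live deployment (context and namespace as in the log; the jsonpath query is an illustrative assumption, not part of the test):

	# Print the image the metrics-server deployment actually references.
	kubectl --context old-k8s-version-787000 -n kube-system \
		get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
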
TestStartStop/group/old-k8s-version/serial/SecondStart (7.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-787000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-787000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.976090333s)

-- stdout --
	* [old-k8s-version-787000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-787000 in cluster old-k8s-version-787000
	* Restarting existing qemu2 VM for "old-k8s-version-787000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-787000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0524 12:35:02.840025    5450 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:02.840183    5450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:02.840186    5450 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:02.840189    5450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:02.840254    5450 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:02.841228    5450 out.go:303] Setting JSON to false
	I0524 12:35:02.856593    5450 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3873,"bootTime":1684953029,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:02.856665    5450 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:02.860804    5450 out.go:177] * [old-k8s-version-787000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:02.870584    5450 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:02.867784    5450 notify.go:220] Checking for updates...
	I0524 12:35:02.878628    5450 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:02.885684    5450 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:02.893694    5450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:02.899716    5450 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:02.907657    5450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:02.911954    5450 config.go:182] Loaded profile config "old-k8s-version-787000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0524 12:35:02.915619    5450 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0524 12:35:02.919654    5450 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:02.922665    5450 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:35:02.930747    5450 start.go:295] selected driver: qemu2
	I0524 12:35:02.930753    5450 start.go:870] validating driver "qemu2" against &{Name:old-k8s-version-787000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-787000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:02.930821    5450 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:02.932986    5450 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:35:02.933015    5450 cni.go:84] Creating CNI manager for ""
	I0524 12:35:02.933025    5450 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 12:35:02.933030    5450 start_flags.go:319] config:
	{Name:old-k8s-version-787000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-787000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:02.933126    5450 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:02.941669    5450 out.go:177] * Starting control plane node old-k8s-version-787000 in cluster old-k8s-version-787000
	I0524 12:35:02.944695    5450 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 12:35:02.944723    5450 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 12:35:02.944741    5450 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:02.944811    5450 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:02.944818    5450 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0524 12:35:02.944885    5450 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/old-k8s-version-787000/config.json ...
	I0524 12:35:02.945148    5450 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:02.945162    5450 start.go:364] acquiring machines lock for old-k8s-version-787000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:04.697396    5450 start.go:368] acquired machines lock for "old-k8s-version-787000" in 1.752222709s
	I0524 12:35:04.697520    5450 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:04.697538    5450 fix.go:55] fixHost starting: 
	I0524 12:35:04.698203    5450 fix.go:103] recreateIfNeeded on old-k8s-version-787000: state=Stopped err=<nil>
	W0524 12:35:04.698244    5450 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:04.715149    5450 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-787000" ...
	I0524 12:35:04.721358    5450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f9:37:e1:d1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:35:04.733370    5450 main.go:141] libmachine: STDOUT: 
	I0524 12:35:04.733530    5450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:04.733691    5450 fix.go:57] fixHost completed within 36.147292ms
	I0524 12:35:04.733705    5450 start.go:83] releasing machines lock for "old-k8s-version-787000", held for 36.261125ms
	W0524 12:35:04.734095    5450 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:04.734362    5450 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:04.734380    5450 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:09.736580    5450 start.go:364] acquiring machines lock for old-k8s-version-787000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:09.737018    5450 start.go:368] acquired machines lock for "old-k8s-version-787000" in 335.625µs
	I0524 12:35:09.737135    5450 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:09.737160    5450 fix.go:55] fixHost starting: 
	I0524 12:35:09.738094    5450 fix.go:103] recreateIfNeeded on old-k8s-version-787000: state=Stopped err=<nil>
	W0524 12:35:09.738120    5450 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:09.741225    5450 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-787000" ...
	I0524 12:35:09.748176    5450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f9:37:e1:d1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/old-k8s-version-787000/disk.qcow2
	I0524 12:35:09.757519    5450 main.go:141] libmachine: STDOUT: 
	I0524 12:35:09.757575    5450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:09.757650    5450 fix.go:57] fixHost completed within 20.495583ms
	I0524 12:35:09.757668    5450 start.go:83] releasing machines lock for "old-k8s-version-787000", held for 20.628333ms
	W0524 12:35:09.758052    5450 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-787000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:09.764980    5450 out.go:177] 
	W0524 12:35:09.769137    5450 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:09.769170    5450 out.go:239] * 
	W0524 12:35:09.771663    5450 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:09.781932    5450 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-787000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (66.132542ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.04s)

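SecondStart reuses the existing machine, so fixHost re-runs the same socket_vmnet-backed qemu command and fails the same way within milliseconds. Once the daemon is reachable again, the log's own suggestion is the clean way out; a sketch using the binary and profile from this run (start flags trimmed to the essentials):

	# Discard the stale machine state, then recreate the profile.
	out/minikube-darwin-arm64 delete -p old-k8s-version-787000
	out/minikube-darwin-arm64 start -p old-k8s-version-787000 --driver=qemu2 --kubernetes-version=v1.16.0
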
TestStartStop/group/no-preload/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-601000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-601000 create -f testdata/busybox.yaml: exit status 1 (29.413375ms)

** stderr ** 
	error: context "no-preload-601000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-601000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (27.305625ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (27.479792ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-601000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-601000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-601000 describe deploy/metrics-server -n kube-system: exit status 1 (25.825ms)

** stderr ** 
	error: context "no-preload-601000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-601000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (27.269583ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-601000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-601000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.155557375s)

-- stdout --
	* [no-preload-601000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-601000 in cluster no-preload-601000
	* Restarting existing qemu2 VM for "no-preload-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0524 12:35:05.168704    5476 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:05.168809    5476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:05.168811    5476 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:05.168814    5476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:05.168882    5476 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:05.169890    5476 out.go:303] Setting JSON to false
	I0524 12:35:05.185175    5476 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3876,"bootTime":1684953029,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:05.185257    5476 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:05.189597    5476 out.go:177] * [no-preload-601000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:05.192634    5476 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:05.196645    5476 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:05.192692    5476 notify.go:220] Checking for updates...
	I0524 12:35:05.199544    5476 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:05.203617    5476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:05.206600    5476 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:05.209555    5476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:05.212882    5476 config.go:182] Loaded profile config "no-preload-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:05.213093    5476 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:05.216576    5476 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:35:05.223553    5476 start.go:295] selected driver: qemu2
	I0524 12:35:05.223559    5476 start.go:870] validating driver "qemu2" against &{Name:no-preload-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-601000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:05.223641    5476 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:05.225427    5476 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:35:05.225452    5476 cni.go:84] Creating CNI manager for ""
	I0524 12:35:05.225460    5476 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:05.225465    5476 start_flags.go:319] config:
	{Name:no-preload-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-601000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:05.225527    5476 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.234514    5476 out.go:177] * Starting control plane node no-preload-601000 in cluster no-preload-601000
	I0524 12:35:05.238533    5476 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:05.238899    5476 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/no-preload-601000/config.json ...
	I0524 12:35:05.239204    5476 cache.go:107] acquiring lock: {Name:mk83f238773193a6319b253ac5914ab99560dbf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239239    5476 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:05.239231    5476 cache.go:107] acquiring lock: {Name:mk33aa16ac70bea4b5e4208a223ddc998c6f47af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239258    5476 start.go:364] acquiring machines lock for no-preload-601000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:05.239278    5476 cache.go:107] acquiring lock: {Name:mk4e07ad114f29d0925a9cb771b427e5c418dbb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239339    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0524 12:35:05.239343    5476 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 66.042µs
	I0524 12:35:05.239353    5476 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0524 12:35:05.239360    5476 start.go:368] acquired machines lock for "no-preload-601000" in 90.792µs
	I0524 12:35:05.239358    5476 cache.go:107] acquiring lock: {Name:mkeb7ca46450943839c26fc40c9583287f12fc22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239367    5476 cache.go:107] acquiring lock: {Name:mk578670fae400ebf259a24fe1f33098df7022d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239381    5476 cache.go:107] acquiring lock: {Name:mkea3c27bdedb4db23d18f5fdb9e5f8d27a968b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239392    5476 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:05.239400    5476 fix.go:55] fixHost starting: 
	I0524 12:35:05.239431    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0524 12:35:05.239435    5476 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 207.917µs
	I0524 12:35:05.239441    5476 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0524 12:35:05.239201    5476 cache.go:107] acquiring lock: {Name:mk6815076538afdb2c63860b3ae3741a8ecd2119 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239494    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0524 12:35:05.239498    5476 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 366.208µs
	I0524 12:35:05.239501    5476 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0524 12:35:05.239508    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0524 12:35:05.239529    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0524 12:35:05.239520    5476 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 138.666µs
	I0524 12:35:05.239536    5476 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 179.083µs
	I0524 12:35:05.239543    5476 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0524 12:35:05.239539    5476 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0524 12:35:05.239561    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0524 12:35:05.239650    5476 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 520.041µs
	I0524 12:35:05.239359    5476 cache.go:107] acquiring lock: {Name:mke38817bd25b34e09076823cad5b1482ebe35c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:05.239666    5476 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0524 12:35:05.239681    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0524 12:35:05.239701    5476 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 333.292µs
	I0524 12:35:05.239706    5476 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0524 12:35:05.239842    5476 cache.go:115] /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0524 12:35:05.239853    5476 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 491.583µs
	I0524 12:35:05.239856    5476 fix.go:103] recreateIfNeeded on no-preload-601000: state=Stopped err=<nil>
	W0524 12:35:05.239868    5476 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:05.239862    5476 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0524 12:35:05.239882    5476 cache.go:87] Successfully saved all images to host disk.
	I0524 12:35:05.247605    5476 out.go:177] * Restarting existing qemu2 VM for "no-preload-601000" ...
	I0524 12:35:05.251594    5476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:59:79:07:68:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:35:05.253642    5476 main.go:141] libmachine: STDOUT: 
	I0524 12:35:05.253655    5476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:05.253686    5476 fix.go:57] fixHost completed within 14.28625ms
	I0524 12:35:05.253690    5476 start.go:83] releasing machines lock for "no-preload-601000", held for 14.32075ms
	W0524 12:35:05.253699    5476 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:05.253760    5476 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:05.253769    5476 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:10.254651    5476 start.go:364] acquiring machines lock for no-preload-601000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:10.254760    5476 start.go:368] acquired machines lock for "no-preload-601000" in 90.916µs
	I0524 12:35:10.254786    5476 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:10.254792    5476 fix.go:55] fixHost starting: 
	I0524 12:35:10.254933    5476 fix.go:103] recreateIfNeeded on no-preload-601000: state=Stopped err=<nil>
	W0524 12:35:10.254938    5476 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:10.262099    5476 out.go:177] * Restarting existing qemu2 VM for "no-preload-601000" ...
	I0524 12:35:10.265259    5476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:59:79:07:68:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/no-preload-601000/disk.qcow2
	I0524 12:35:10.267163    5476 main.go:141] libmachine: STDOUT: 
	I0524 12:35:10.267176    5476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:10.267195    5476 fix.go:57] fixHost completed within 12.402791ms
	I0524 12:35:10.267200    5476 start.go:83] releasing machines lock for "no-preload-601000", held for 12.432791ms
	W0524 12:35:10.267285    5476 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:10.273180    5476 out.go:177] 
	W0524 12:35:10.277212    5476 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:10.277221    5476 out.go:239] * 
	* 
	W0524 12:35:10.277704    5476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:10.289125    5476 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-601000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (28.842208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.19s)
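The failure mode for this whole group is visible above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 never receives its network file descriptor and the start aborts after a single 5-second retry. A standalone Go probe, sketched below under the assumption that only the socket path from the logged command line is known (it is not part of the test suite), reproduces the same refusal whenever the socket_vmnet daemon is not accepting connections:

	// probe.go — dial the unix socket that socket_vmnet_client expects to find.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing qemu invocation
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On a host in this state the dial fails the same way the log does:
			// either the socket file is gone or nothing is listening behind it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}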

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-787000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (31.192833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
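This subtest never reaches the cluster: the failed start above exited before minikube could write a kubeconfig entry, so the context "old-k8s-version-787000" simply is not there and every client-config lookup fails immediately. The lookup amounts to the following sketch (a hypothetical helper, assuming k8s.io/client-go is importable; it is not the harness's actual code):

	// contextcheck.go — report whether a kubeconfig context exists by name.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		name := "old-k8s-version-787000" // profile name from the log above
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist\n", name) // the error reported above
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}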

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-787000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-787000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-787000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.984542ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-787000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-787000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (27.885958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-787000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-787000 "sudo crictl images -o json": exit status 89 (37.484708ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-787000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-787000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-787000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (27.25225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
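The decode error here is a cascade rather than a separate bug: with the node stopped, the ssh command returns exit status 89 and a plain-text hint, and the test then hands that text to a JSON decoder. The sketch below shows the shape of the failure; the struct fields mirror the CRI image-list JSON as an assumption, not a guaranteed schema:

	// decode.go — feeding crictl's replacement hint text to a JSON decoder.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// The bytes the test actually received from the stopped node:
		got := []byte(`* The control plane node must be running for this command`)
		var list imageList
		if err := json.Unmarshal(got, &list); err != nil {
			// Prints: invalid character '*' looking for beginning of value
			fmt.Println(err)
		}
	}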

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-787000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-787000 --alsologtostderr -v=1: exit status 89 (41.251584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-787000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:35:10.036294    5494 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:10.036716    5494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:10.036721    5494 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:10.036724    5494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:10.036838    5494 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:10.037069    5494 out.go:303] Setting JSON to false
	I0524 12:35:10.037077    5494 mustload.go:65] Loading cluster: old-k8s-version-787000
	I0524 12:35:10.037247    5494 config.go:182] Loaded profile config "old-k8s-version-787000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0524 12:35:10.042033    5494 out.go:177] * The control plane node must be running for this command
	I0524 12:35:10.046331    5494 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-787000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-787000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (27.324708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (27.18175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-787000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
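Pause fails for the same underlying reason: minikube declines to operate on a stopped control plane and exits with status 89 alongside the hint text. For reference, a caller can surface that exit code with the standard library alone (an illustrative sketch, not the harness's code):

	// exitcode.go — recover a subprocess's exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-787000")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("exit status %d\n", ee.ExitCode()) // 89 in the run above
		}
	}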

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-601000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (29.229583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-601000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.601334ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-601000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (28.824166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-601000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-601000 "sudo crictl images -o json": exit status 89 (45.81125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-601000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-601000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-601000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (29.295542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-601000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-601000 --alsologtostderr -v=1: exit status 89 (40.273958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-601000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:35:10.516624    5529 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:10.516740    5529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:10.516743    5529 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:10.516745    5529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:10.516820    5529 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:10.517021    5529 out.go:303] Setting JSON to false
	I0524 12:35:10.517030    5529 mustload.go:65] Loading cluster: no-preload-601000
	I0524 12:35:10.517215    5529 config.go:182] Loaded profile config "no-preload-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:10.521075    5529 out.go:177] * The control plane node must be running for this command
	I0524 12:35:10.525238    5529 out.go:177]   To start a cluster, run: "minikube start -p no-preload-601000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-601000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (36.73425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (32.429167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-989000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-989000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.746166583s)

                                                
                                                
-- stdout --
	* [embed-certs-989000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-989000 in cluster embed-certs-989000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 12:35:10.537339    5531 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:10.537476    5531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:10.537479    5531 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:10.537482    5531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:10.537556    5531 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:10.538552    5531 out.go:303] Setting JSON to false
	I0524 12:35:10.555724    5531 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3881,"bootTime":1684953029,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:10.555802    5531 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:10.560176    5531 out.go:177] * [embed-certs-989000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:10.573157    5531 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:10.570066    5531 notify.go:220] Checking for updates...
	I0524 12:35:10.579994    5531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:10.584177    5531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:10.587206    5531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:10.590144    5531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:10.599183    5531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:10.604307    5531 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:10.604395    5531 config.go:182] Loaded profile config "no-preload-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:10.604417    5531 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:10.608142    5531 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:35:10.614050    5531 start.go:295] selected driver: qemu2
	I0524 12:35:10.614057    5531 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:35:10.614063    5531 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:10.616026    5531 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:35:10.620168    5531 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:35:10.624226    5531 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:35:10.624245    5531 cni.go:84] Creating CNI manager for ""
	I0524 12:35:10.624252    5531 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:10.624261    5531 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:35:10.624268    5531 start_flags.go:319] config:
	{Name:embed-certs-989000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-989000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:10.624358    5531 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:10.632185    5531 out.go:177] * Starting control plane node embed-certs-989000 in cluster embed-certs-989000
	I0524 12:35:10.636093    5531 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:10.636116    5531 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:35:10.636126    5531 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:10.636180    5531 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:10.636184    5531 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:35:10.636253    5531 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/embed-certs-989000/config.json ...
	I0524 12:35:10.636263    5531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/embed-certs-989000/config.json: {Name:mkff2605ef58b9bd65f99aec8ce258454f892026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:35:10.636465    5531 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:10.636478    5531 start.go:364] acquiring machines lock for embed-certs-989000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:10.636499    5531 start.go:368] acquired machines lock for "embed-certs-989000" in 16.333µs
	I0524 12:35:10.636511    5531 start.go:93] Provisioning new machine with config: &{Name:embed-certs-989000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-989000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:10.636542    5531 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:10.644140    5531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:10.658690    5531 start.go:159] libmachine.API.Create for "embed-certs-989000" (driver="qemu2")
	I0524 12:35:10.658711    5531 client.go:168] LocalClient.Create starting
	I0524 12:35:10.658777    5531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:10.658810    5531 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:10.658821    5531 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:10.658863    5531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:10.658879    5531 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:10.658891    5531 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:10.659259    5531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:10.819158    5531 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:10.909984    5531 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:10.909994    5531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:10.910137    5531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:10.918774    5531 main.go:141] libmachine: STDOUT: 
	I0524 12:35:10.918796    5531 main.go:141] libmachine: STDERR: 
	I0524 12:35:10.918869    5531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2 +20000M
	I0524 12:35:10.926381    5531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:10.926397    5531 main.go:141] libmachine: STDERR: 
	I0524 12:35:10.926418    5531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:10.926429    5531 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:10.926466    5531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:37:e9:98:03:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:10.928007    5531 main.go:141] libmachine: STDOUT: 
	I0524 12:35:10.928020    5531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:10.928040    5531 client.go:171] LocalClient.Create took 269.326291ms
	I0524 12:35:12.930260    5531 start.go:128] duration metric: createHost completed in 2.293713708s
	I0524 12:35:12.930333    5531 start.go:83] releasing machines lock for "embed-certs-989000", held for 2.293849875s
	W0524 12:35:12.930387    5531 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:12.947485    5531 out.go:177] * Deleting "embed-certs-989000" in qemu2 ...
	W0524 12:35:12.963183    5531 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:12.963213    5531 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:17.965415    5531 start.go:364] acquiring machines lock for embed-certs-989000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:17.965847    5531 start.go:368] acquired machines lock for "embed-certs-989000" in 339.042µs
	I0524 12:35:17.965991    5531 start.go:93] Provisioning new machine with config: &{Name:embed-certs-989000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-989000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:17.966230    5531 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:17.976139    5531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:18.023248    5531 start.go:159] libmachine.API.Create for "embed-certs-989000" (driver="qemu2")
	I0524 12:35:18.023280    5531 client.go:168] LocalClient.Create starting
	I0524 12:35:18.023425    5531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:18.023476    5531 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:18.023497    5531 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:18.023579    5531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:18.023615    5531 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:18.023632    5531 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:18.024140    5531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:18.152422    5531 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:18.197467    5531 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:18.197473    5531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:18.197618    5531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:18.206052    5531 main.go:141] libmachine: STDOUT: 
	I0524 12:35:18.206073    5531 main.go:141] libmachine: STDERR: 
	I0524 12:35:18.206136    5531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2 +20000M
	I0524 12:35:18.213234    5531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:18.213247    5531 main.go:141] libmachine: STDERR: 
	I0524 12:35:18.213258    5531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:18.213265    5531 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:18.213308    5531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ab:5c:24:f6:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:18.214801    5531 main.go:141] libmachine: STDOUT: 
	I0524 12:35:18.214818    5531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:18.214838    5531 client.go:171] LocalClient.Create took 191.55375ms
	I0524 12:35:20.217019    5531 start.go:128] duration metric: createHost completed in 2.2507715s
	I0524 12:35:20.217077    5531 start.go:83] releasing machines lock for "embed-certs-989000", held for 2.251230708s
	W0524 12:35:20.217665    5531 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:20.226284    5531 out.go:177] 
	W0524 12:35:20.231585    5531 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:20.231631    5531 out.go:239] * 
	* 
	W0524 12:35:20.234084    5531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:20.246112    5531 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-989000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (56.304167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.80s)
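
Every failure in this group traces to the single error visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU is ever launched. A minimal way to confirm this from a shell on the affected host, assuming the make-install layout under /opt/socket_vmnet that the logged paths show (the launchd query is an assumption based on socket_vmnet's default service install):

    # does the Unix socket exist at all?
    ls -l /var/run/socket_vmnet
    # is a socket_vmnet daemon registered with launchd? (label varies by install)
    sudo launchctl list | grep -i socket_vmnet
    # re-run the exact wrapper from the log with a no-op command; a healthy
    # daemon makes this exit 0 instead of printing "Connection refused"
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true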

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-324000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-324000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.24104575s)

-- stdout --
	* [default-k8s-diff-port-324000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-324000 in cluster default-k8s-diff-port-324000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-324000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:35:11.334949    5573 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:11.335064    5573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:11.335067    5573 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:11.335069    5573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:11.335140    5573 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:11.336148    5573 out.go:303] Setting JSON to false
	I0524 12:35:11.351269    5573 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3882,"bootTime":1684953029,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:11.351359    5573 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:11.360396    5573 out.go:177] * [default-k8s-diff-port-324000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:11.364478    5573 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:11.364518    5573 notify.go:220] Checking for updates...
	I0524 12:35:11.370444    5573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:11.373465    5573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:11.376443    5573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:11.379445    5573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:11.382464    5573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:11.385755    5573 config.go:182] Loaded profile config "embed-certs-989000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:11.385815    5573 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:11.385836    5573 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:11.389450    5573 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:35:11.396387    5573 start.go:295] selected driver: qemu2
	I0524 12:35:11.396393    5573 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:35:11.396399    5573 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:11.398354    5573 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 12:35:11.402386    5573 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:35:11.405555    5573 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:35:11.405572    5573 cni.go:84] Creating CNI manager for ""
	I0524 12:35:11.405591    5573 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:11.405595    5573 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:35:11.405600    5573 start_flags.go:319] config:
	{Name:default-k8s-diff-port-324000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-324000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:11.405677    5573 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:11.414396    5573 out.go:177] * Starting control plane node default-k8s-diff-port-324000 in cluster default-k8s-diff-port-324000
	I0524 12:35:11.418348    5573 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:11.418372    5573 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:35:11.418384    5573 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:11.418455    5573 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:11.418465    5573 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:35:11.418521    5573 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/default-k8s-diff-port-324000/config.json ...
	I0524 12:35:11.418536    5573 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/default-k8s-diff-port-324000/config.json: {Name:mk6b5b867a9b113d7a4fc7cc2e899ce01a186715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:35:11.418742    5573 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:11.418757    5573 start.go:364] acquiring machines lock for default-k8s-diff-port-324000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:12.930479    5573 start.go:368] acquired machines lock for "default-k8s-diff-port-324000" in 1.511677875s
	I0524 12:35:12.930603    5573 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-324000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-324000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:12.930833    5573 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:12.939438    5573 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:12.984362    5573 start.go:159] libmachine.API.Create for "default-k8s-diff-port-324000" (driver="qemu2")
	I0524 12:35:12.984420    5573 client.go:168] LocalClient.Create starting
	I0524 12:35:12.984555    5573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:12.984596    5573 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:12.984623    5573 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:12.984698    5573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:12.984729    5573 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:12.984741    5573 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:12.985421    5573 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:13.109855    5573 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:13.177122    5573 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:13.177127    5573 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:13.177375    5573 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:13.186026    5573 main.go:141] libmachine: STDOUT: 
	I0524 12:35:13.186038    5573 main.go:141] libmachine: STDERR: 
	I0524 12:35:13.186087    5573 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2 +20000M
	I0524 12:35:13.193252    5573 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:13.193263    5573 main.go:141] libmachine: STDERR: 
	I0524 12:35:13.193277    5573 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:13.193284    5573 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:13.193316    5573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:63:e2:1b:65:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:13.194819    5573 main.go:141] libmachine: STDOUT: 
	I0524 12:35:13.194832    5573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:13.194854    5573 client.go:171] LocalClient.Create took 210.424ms
	I0524 12:35:15.196996    5573 start.go:128] duration metric: createHost completed in 2.266161875s
	I0524 12:35:15.197067    5573 start.go:83] releasing machines lock for "default-k8s-diff-port-324000", held for 2.266578292s
	W0524 12:35:15.197151    5573 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:15.204781    5573 out.go:177] * Deleting "default-k8s-diff-port-324000" in qemu2 ...
	W0524 12:35:15.225967    5573 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:15.225990    5573 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:20.226641    5573 start.go:364] acquiring machines lock for default-k8s-diff-port-324000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:20.226967    5573 start.go:368] acquired machines lock for "default-k8s-diff-port-324000" in 269.5µs
	I0524 12:35:20.227118    5573 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-324000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-324000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:20.227360    5573 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:20.239133    5573 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:20.284302    5573 start.go:159] libmachine.API.Create for "default-k8s-diff-port-324000" (driver="qemu2")
	I0524 12:35:20.284349    5573 client.go:168] LocalClient.Create starting
	I0524 12:35:20.284476    5573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:20.284526    5573 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:20.284541    5573 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:20.284607    5573 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:20.284634    5573 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:20.284647    5573 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:20.285193    5573 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:20.414371    5573 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:20.476762    5573 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:20.476771    5573 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:20.476929    5573 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:20.485826    5573 main.go:141] libmachine: STDOUT: 
	I0524 12:35:20.485841    5573 main.go:141] libmachine: STDERR: 
	I0524 12:35:20.485893    5573 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2 +20000M
	I0524 12:35:20.493962    5573 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:20.493977    5573 main.go:141] libmachine: STDERR: 
	I0524 12:35:20.493992    5573 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:20.494008    5573 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:20.494039    5573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e2:77:9e:41:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:20.495806    5573 main.go:141] libmachine: STDOUT: 
	I0524 12:35:20.495822    5573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:20.495839    5573 client.go:171] LocalClient.Create took 211.487542ms
	I0524 12:35:22.496769    5573 start.go:128] duration metric: createHost completed in 2.269360792s
	I0524 12:35:22.496841    5573 start.go:83] releasing machines lock for "default-k8s-diff-port-324000", held for 2.269875625s
	W0524 12:35:22.497339    5573 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-324000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-324000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:22.505729    5573 out.go:177] 
	W0524 12:35:22.517041    5573 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:22.517068    5573 out.go:239] * 
	* 
	W0524 12:35:22.519964    5573 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:22.531913    5573 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-324000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (63.310625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.31s)
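
Note the driver's recovery path in the stderr: the first create fails, minikube deletes the half-made profile, waits 5 seconds, and retries, but the retry dies on the same host-side refusal, so no amount of retrying inside minikube can help until the daemon is up. A sketch of starting it by hand, assuming the same /opt/socket_vmnet install as above; the gateway address is socket_vmnet's documented default, not a value taken from this log:

    # run the daemon as root (vmnet requires it); leave it in the foreground
    # while re-running the test, or wrap it in a launchd job for CI
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet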

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-989000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-989000 create -f testdata/busybox.yaml: exit status 1 (31.292959ms)

** stderr ** 
	error: context "embed-certs-989000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-989000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (32.523542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (32.485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
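
This kubectl failure is a cascade, not an independent bug: FirstStart never provisioned the VM, so minikube never merged a context named embed-certs-989000 into the kubeconfig, and every later --context lookup fails the same way. That can be confirmed against the kubeconfig this run uses (path taken from the stdout above):

    KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig kubectl config get-contexts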

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-989000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-989000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-989000 describe deploy/metrics-server -n kube-system: exit status 1 (26.742875ms)

** stderr ** 
	error: context "embed-certs-989000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-989000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (27.1655ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
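
Worth noting: the addons enable command itself succeeded, because enabling an addon only records the request in the profile's config on disk; only the follow-up kubectl describe needs a live cluster. The later config dump for this profile shows the custom image and registry persisted (CustomAddonImages and CustomAddonRegistries), which can be checked directly, assuming the profile's config.json carries the same fields:

    grep -E 'MetricsServer|fake.domain' /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/embed-certs-989000/config.json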

TestStartStop/group/embed-certs/serial/SecondStart (6.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-989000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-989000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (6.923035083s)

-- stdout --
	* [embed-certs-989000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-989000 in cluster embed-certs-989000
	* Restarting existing qemu2 VM for "embed-certs-989000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-989000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:35:20.692932    5610 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:20.693051    5610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:20.693054    5610 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:20.693056    5610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:20.693120    5610 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:20.694063    5610 out.go:303] Setting JSON to false
	I0524 12:35:20.709220    5610 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3891,"bootTime":1684953029,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:20.709294    5610 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:20.717180    5610 out.go:177] * [embed-certs-989000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:20.721224    5610 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:20.721250    5610 notify.go:220] Checking for updates...
	I0524 12:35:20.729186    5610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:20.732271    5610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:20.735368    5610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:20.738256    5610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:20.741241    5610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:20.744524    5610 config.go:182] Loaded profile config "embed-certs-989000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:20.744752    5610 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:20.749212    5610 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:35:20.755120    5610 start.go:295] selected driver: qemu2
	I0524 12:35:20.755127    5610 start.go:870] validating driver "qemu2" against &{Name:embed-certs-989000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-989000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:20.755213    5610 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:20.757144    5610 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:35:20.757165    5610 cni.go:84] Creating CNI manager for ""
	I0524 12:35:20.757174    5610 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:20.757183    5610 start_flags.go:319] config:
	{Name:embed-certs-989000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-989000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:20.757257    5610 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:20.764233    5610 out.go:177] * Starting control plane node embed-certs-989000 in cluster embed-certs-989000
	I0524 12:35:20.768221    5610 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:20.768242    5610 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:35:20.768254    5610 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:20.768322    5610 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:20.768327    5610 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:35:20.768386    5610 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/embed-certs-989000/config.json ...
	I0524 12:35:20.768751    5610 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:20.768763    5610 start.go:364] acquiring machines lock for embed-certs-989000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:22.497017    5610 start.go:368] acquired machines lock for "embed-certs-989000" in 1.728189417s
	I0524 12:35:22.497208    5610 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:22.497227    5610 fix.go:55] fixHost starting: 
	I0524 12:35:22.497959    5610 fix.go:103] recreateIfNeeded on embed-certs-989000: state=Stopped err=<nil>
	W0524 12:35:22.498003    5610 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:22.513659    5610 out.go:177] * Restarting existing qemu2 VM for "embed-certs-989000" ...
	I0524 12:35:22.521053    5610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ab:5c:24:f6:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:22.530635    5610 main.go:141] libmachine: STDOUT: 
	I0524 12:35:22.530702    5610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:22.530850    5610 fix.go:57] fixHost completed within 33.610209ms
	I0524 12:35:22.530880    5610 start.go:83] releasing machines lock for "embed-certs-989000", held for 33.8145ms
	W0524 12:35:22.530918    5610 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:22.531242    5610 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:22.531263    5610 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:27.533448    5610 start.go:364] acquiring machines lock for embed-certs-989000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:27.533853    5610 start.go:368] acquired machines lock for "embed-certs-989000" in 301.5µs
	I0524 12:35:27.533967    5610 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:27.533991    5610 fix.go:55] fixHost starting: 
	I0524 12:35:27.534693    5610 fix.go:103] recreateIfNeeded on embed-certs-989000: state=Stopped err=<nil>
	W0524 12:35:27.534719    5610 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:27.544493    5610 out.go:177] * Restarting existing qemu2 VM for "embed-certs-989000" ...
	I0524 12:35:27.547763    5610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ab:5c:24:f6:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/embed-certs-989000/disk.qcow2
	I0524 12:35:27.556582    5610 main.go:141] libmachine: STDOUT: 
	I0524 12:35:27.556644    5610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:27.556740    5610 fix.go:57] fixHost completed within 22.750708ms
	I0524 12:35:27.556762    5610 start.go:83] releasing machines lock for "embed-certs-989000", held for 22.887208ms
	W0524 12:35:27.557074    5610 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-989000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-989000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:27.563452    5610 out.go:177] 
	W0524 12:35:27.566668    5610 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:27.566693    5610 out.go:239] * 
	* 
	W0524 12:35:27.569286    5610 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:27.577338    5610 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-989000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (67.926042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.99s)
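
SecondStart differs from FirstStart only in taking the fixHost path, restarting the existing VM instead of creating one, yet it ends at the same socket_vmnet_client connect. For hosts where socket_vmnet came from Homebrew rather than the /opt layout seen here, minikube's qemu driver documentation suggests running it as a root service; a sketch, hedged because this machine's paths point at a make-install instead:

    brew install socket_vmnet
    # the service must run as root for vmnet access
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet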

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-324000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-324000 create -f testdata/busybox.yaml: exit status 1 (29.1665ms)

** stderr ** 
	error: context "default-k8s-diff-port-324000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-324000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (27.317292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (27.450708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)
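
The kubectl failures in this test are downstream of the failed start: because "minikube start" exited with status 80 before provisioning, it never wrote a default-k8s-diff-port-324000 entry into the kubeconfig, so every "kubectl --context default-k8s-diff-port-324000 ..." call fails with the same "context does not exist" error. A quick confirmation with standard kubectl commands, assuming the kubeconfig path shown in the log:

	# Lists whichever contexts actually exist; the profile's entry will be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig kubectl config get-contexts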

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-324000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-324000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-324000 describe deploy/metrics-server -n kube-system: exit status 1 (25.934833ms)

** stderr ** 
	error: context "default-k8s-diff-port-324000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-324000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (27.806792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-324000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-324000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.161753292s)

-- stdout --
	* [default-k8s-diff-port-324000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-324000 in cluster default-k8s-diff-port-324000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:35:22.987630    5634 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:22.987753    5634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:22.987756    5634 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:22.987759    5634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:22.987826    5634 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:22.988755    5634 out.go:303] Setting JSON to false
	I0524 12:35:23.003830    5634 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3893,"bootTime":1684953029,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:23.003904    5634 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:23.008033    5634 out.go:177] * [default-k8s-diff-port-324000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:23.014978    5634 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:23.015011    5634 notify.go:220] Checking for updates...
	I0524 12:35:23.021752    5634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:23.024988    5634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:23.027971    5634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:23.031006    5634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:23.033962    5634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:23.037238    5634 config.go:182] Loaded profile config "default-k8s-diff-port-324000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:23.037446    5634 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:23.041933    5634 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:35:23.048944    5634 start.go:295] selected driver: qemu2
	I0524 12:35:23.048950    5634 start.go:870] validating driver "qemu2" against &{Name:default-k8s-diff-port-324000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-324000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:23.049012    5634 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:23.051022    5634 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 12:35:23.051050    5634 cni.go:84] Creating CNI manager for ""
	I0524 12:35:23.051060    5634 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:23.051065    5634 start_flags.go:319] config:
	{Name:default-k8s-diff-port-324000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-324000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:23.051144    5634 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:23.059952    5634 out.go:177] * Starting control plane node default-k8s-diff-port-324000 in cluster default-k8s-diff-port-324000
	I0524 12:35:23.063984    5634 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:23.064023    5634 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:35:23.064034    5634 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:23.064099    5634 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:23.064104    5634 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:35:23.064165    5634 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/default-k8s-diff-port-324000/config.json ...
	I0524 12:35:23.064479    5634 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:23.064491    5634 start.go:364] acquiring machines lock for default-k8s-diff-port-324000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:23.064516    5634 start.go:368] acquired machines lock for "default-k8s-diff-port-324000" in 19.375µs
	I0524 12:35:23.064525    5634 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:23.064529    5634 fix.go:55] fixHost starting: 
	I0524 12:35:23.064645    5634 fix.go:103] recreateIfNeeded on default-k8s-diff-port-324000: state=Stopped err=<nil>
	W0524 12:35:23.064653    5634 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:23.072932    5634 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-324000" ...
	I0524 12:35:23.076935    5634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e2:77:9e:41:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:23.078741    5634 main.go:141] libmachine: STDOUT: 
	I0524 12:35:23.078759    5634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:23.078789    5634 fix.go:57] fixHost completed within 14.25875ms
	I0524 12:35:23.078794    5634 start.go:83] releasing machines lock for "default-k8s-diff-port-324000", held for 14.274375ms
	W0524 12:35:23.078801    5634 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:23.078872    5634 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:23.078877    5634 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:28.080276    5634 start.go:364] acquiring machines lock for default-k8s-diff-port-324000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:28.080343    5634 start.go:368] acquired machines lock for "default-k8s-diff-port-324000" in 47.916µs
	I0524 12:35:28.080368    5634 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:28.080372    5634 fix.go:55] fixHost starting: 
	I0524 12:35:28.080506    5634 fix.go:103] recreateIfNeeded on default-k8s-diff-port-324000: state=Stopped err=<nil>
	W0524 12:35:28.080511    5634 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:28.082304    5634 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-324000" ...
	I0524 12:35:28.093109    5634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e2:77:9e:41:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/default-k8s-diff-port-324000/disk.qcow2
	I0524 12:35:28.095219    5634 main.go:141] libmachine: STDOUT: 
	I0524 12:35:28.095235    5634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:28.095256    5634 fix.go:57] fixHost completed within 14.884167ms
	I0524 12:35:28.095262    5634 start.go:83] releasing machines lock for "default-k8s-diff-port-324000", held for 14.914125ms
	W0524 12:35:28.095366    5634 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-324000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-324000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:28.102008    5634 out.go:177] 
	W0524 12:35:28.106174    5634 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:28.106186    5634 out.go:239] * 
	* 
	W0524 12:35:28.106641    5634 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:28.117985    5634 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-324000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (28.392ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.19s)
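
Note the libmachine command line in the stderr log: QEMU is not launched directly but wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and passes the connected socket to qemu-system-aarch64 as file descriptor 3 (hence "-netdev socket,id=net0,fd=3"). That suggests a standalone connectivity probe, on the assumption that the client simply execs an arbitrary command after connecting (it is documented for wrapping QEMU; "true" as a stand-in is an untested shortcut):

	# Should fail with the same "Connection refused" while the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket_vmnet reachable"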

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-989000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (31.161ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-989000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-989000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-989000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.34775ms)

** stderr ** 
	error: context "embed-certs-989000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-989000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (28.0365ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-989000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-989000 "sudo crictl images -o json": exit status 89 (38.36875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-989000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-989000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-989000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (27.514083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
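
The "failed to decode images json" message is a symptom rather than a separate bug: the test pipes stdout straight into a JSON decoder, and with the node stopped it received minikube's plain-text banner (which begins with '*') instead of crictl output. On a running cluster the same command returns a JSON object with an "images" array; a sketch of how the expected tags could be listed, assuming crictl's usual JSON field names and jq available on the host:

	out/minikube-darwin-arm64 ssh -p embed-certs-989000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'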

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-989000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-989000 --alsologtostderr -v=1: exit status 89 (40.001875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-989000"

-- /stdout --
** stderr ** 
	I0524 12:35:27.838825    5655 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:27.838973    5655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:27.838977    5655 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:27.838980    5655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:27.839052    5655 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:27.839244    5655 out.go:303] Setting JSON to false
	I0524 12:35:27.839256    5655 mustload.go:65] Loading cluster: embed-certs-989000
	I0524 12:35:27.839419    5655 config.go:182] Loaded profile config "embed-certs-989000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:27.844065    5655 out.go:177] * The control plane node must be running for this command
	I0524 12:35:27.848038    5655 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-989000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-989000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (27.356542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (27.330917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-989000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-324000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (28.582583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-324000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-324000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-324000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.987042ms)

** stderr ** 
	error: context "default-k8s-diff-port-324000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-324000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (29.499375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-324000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-324000 "sudo crictl images -o json": exit status 89 (40.260167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-324000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-324000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-324000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (29.676666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-324000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-324000 --alsologtostderr -v=1: exit status 89 (45.813459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-324000"

-- /stdout --
** stderr ** 
	I0524 12:35:28.337859    5690 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:28.338022    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:28.338024    5690 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:28.338027    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:28.338100    5690 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:28.338314    5690 out.go:303] Setting JSON to false
	I0524 12:35:28.338323    5690 mustload.go:65] Loading cluster: default-k8s-diff-port-324000
	I0524 12:35:28.338508    5690 config.go:182] Loaded profile config "default-k8s-diff-port-324000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:28.343035    5690 out.go:177] * The control plane node must be running for this command
	I0524 12:35:28.351064    5690 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-324000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-324000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (32.360041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (33.844791ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-758000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-758000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.86678425s)

-- stdout --
	* [newest-cni-758000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-758000 in cluster newest-cni-758000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-758000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:35:28.347308    5691 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:28.347543    5691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:28.347548    5691 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:28.347551    5691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:28.347624    5691 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:28.351438    5691 out.go:303] Setting JSON to false
	I0524 12:35:28.367851    5691 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3899,"bootTime":1684953029,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:28.367948    5691 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:28.372133    5691 out.go:177] * [newest-cni-758000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:28.378123    5691 notify.go:220] Checking for updates...
	I0524 12:35:28.382046    5691 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:28.392063    5691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:28.396028    5691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:28.399011    5691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:28.402100    5691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:28.406026    5691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:28.409309    5691 config.go:182] Loaded profile config "default-k8s-diff-port-324000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:28.409372    5691 config.go:182] Loaded profile config "multinode-636000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:28.409394    5691 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:28.412968    5691 out.go:177] * Using the qemu2 driver based on user configuration
	I0524 12:35:28.421992    5691 start.go:295] selected driver: qemu2
	I0524 12:35:28.422000    5691 start.go:870] validating driver "qemu2" against <nil>
	I0524 12:35:28.422008    5691 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:28.423923    5691 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0524 12:35:28.423950    5691 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0524 12:35:28.432084    5691 out.go:177] * Automatically selected the socket_vmnet network
	I0524 12:35:28.435248    5691 start_flags.go:934] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0524 12:35:28.435265    5691 cni.go:84] Creating CNI manager for ""
	I0524 12:35:28.435274    5691 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:28.435278    5691 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 12:35:28.435285    5691 start_flags.go:319] config:
	{Name:newest-cni-758000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-758000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:28.435375    5691 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:28.443043    5691 out.go:177] * Starting control plane node newest-cni-758000 in cluster newest-cni-758000
	I0524 12:35:28.446967    5691 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:28.446998    5691 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:35:28.447010    5691 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:28.447093    5691 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:28.447098    5691 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:35:28.447155    5691 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/newest-cni-758000/config.json ...
	I0524 12:35:28.447166    5691 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/newest-cni-758000/config.json: {Name:mk2e1218eca96d2ae67fcbca616a021fdbc47d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 12:35:28.447351    5691 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:28.447364    5691 start.go:364] acquiring machines lock for newest-cni-758000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:28.447388    5691 start.go:368] acquired machines lock for "newest-cni-758000" in 19.708µs
	I0524 12:35:28.447400    5691 start.go:93] Provisioning new machine with config: &{Name:newest-cni-758000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-758000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:28.447429    5691 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:28.454001    5691 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:28.468217    5691 start.go:159] libmachine.API.Create for "newest-cni-758000" (driver="qemu2")
	I0524 12:35:28.468245    5691 client.go:168] LocalClient.Create starting
	I0524 12:35:28.468317    5691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:28.468341    5691 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:28.468356    5691 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:28.468398    5691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:28.468412    5691 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:28.468420    5691 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:28.468771    5691 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:28.629759    5691 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:28.707284    5691 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:28.707298    5691 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:28.707520    5691 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:28.716327    5691 main.go:141] libmachine: STDOUT: 
	I0524 12:35:28.716351    5691 main.go:141] libmachine: STDERR: 
	I0524 12:35:28.716422    5691 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2 +20000M
	I0524 12:35:28.723887    5691 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:28.723902    5691 main.go:141] libmachine: STDERR: 
	I0524 12:35:28.723925    5691 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:28.723938    5691 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:28.723985    5691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:1a:33:4f:de:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:28.725635    5691 main.go:141] libmachine: STDOUT: 
	I0524 12:35:28.725647    5691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:28.725666    5691 client.go:171] LocalClient.Create took 257.418125ms
	I0524 12:35:30.727839    5691 start.go:128] duration metric: createHost completed in 2.280404625s
	I0524 12:35:30.727929    5691 start.go:83] releasing machines lock for "newest-cni-758000", held for 2.280553458s
	W0524 12:35:30.728027    5691 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:30.734566    5691 out.go:177] * Deleting "newest-cni-758000" in qemu2 ...
	W0524 12:35:30.754801    5691 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:30.754832    5691 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:35.757046    5691 start.go:364] acquiring machines lock for newest-cni-758000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:35.757696    5691 start.go:368] acquired machines lock for "newest-cni-758000" in 534.042µs
	I0524 12:35:35.757800    5691 start.go:93] Provisioning new machine with config: &{Name:newest-cni-758000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-758000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 12:35:35.758109    5691 start.go:125] createHost starting for "" (driver="qemu2")
	I0524 12:35:35.766033    5691 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 12:35:35.811574    5691 start.go:159] libmachine.API.Create for "newest-cni-758000" (driver="qemu2")
	I0524 12:35:35.811629    5691 client.go:168] LocalClient.Create starting
	I0524 12:35:35.811754    5691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/ca.pem
	I0524 12:35:35.811792    5691 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:35.811809    5691 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:35.811876    5691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16573-1024/.minikube/certs/cert.pem
	I0524 12:35:35.811903    5691 main.go:141] libmachine: Decoding PEM data...
	I0524 12:35:35.811928    5691 main.go:141] libmachine: Parsing certificate...
	I0524 12:35:35.812463    5691 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso...
	I0524 12:35:35.934737    5691 main.go:141] libmachine: Creating SSH key...
	I0524 12:35:36.123982    5691 main.go:141] libmachine: Creating Disk image...
	I0524 12:35:36.123990    5691 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0524 12:35:36.124164    5691 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2.raw /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:36.133007    5691 main.go:141] libmachine: STDOUT: 
	I0524 12:35:36.133025    5691 main.go:141] libmachine: STDERR: 
	I0524 12:35:36.133072    5691 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2 +20000M
	I0524 12:35:36.140391    5691 main.go:141] libmachine: STDOUT: Image resized.
	
	I0524 12:35:36.140405    5691 main.go:141] libmachine: STDERR: 
	I0524 12:35:36.140416    5691 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:36.140422    5691 main.go:141] libmachine: Starting QEMU VM...
	I0524 12:35:36.140468    5691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3f:be:30:f5:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:36.142001    5691 main.go:141] libmachine: STDOUT: 
	I0524 12:35:36.142024    5691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:36.142036    5691 client.go:171] LocalClient.Create took 330.406084ms
	I0524 12:35:38.144218    5691 start.go:128] duration metric: createHost completed in 2.386104458s
	I0524 12:35:38.144272    5691 start.go:83] releasing machines lock for "newest-cni-758000", held for 2.386575625s
	W0524 12:35:38.144845    5691 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-758000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-758000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:38.155461    5691 out.go:177] 
	W0524 12:35:38.159678    5691 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:38.159704    5691 out.go:239] * 
	* 
	W0524 12:35:38.162184    5691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:38.171429    5691 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-758000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000: exit status 7 (66.590375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-758000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
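Note on this failure: both provisioning attempts die at the same step, before Kubernetes is ever involved: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and createHost aborts. A minimal pre-flight check for the build host, as a sketch: it assumes socket_vmnet was installed through Homebrew as minikube's qemu2 driver setup describes, and uses the socket path shown in the log.

	# Does the daemon's control socket exist?
	ls -l /var/run/socket_vmnet
	# Is the root launchd service for socket_vmnet loaded and running?
	sudo launchctl list | grep -i socket_vmnet
	# Restart the Homebrew-managed service before re-running the suite
	# (invoking brew by absolute path, since it is run under sudo).
	sudo "$(which brew)" services restart socket_vmnet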

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-758000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-758000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.173918s)

-- stdout --
	* [newest-cni-758000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-758000 in cluster newest-cni-758000
	* Restarting existing qemu2 VM for "newest-cni-758000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-758000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0524 12:35:38.491684    5737 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:38.491783    5737 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:38.491787    5737 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:38.491790    5737 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:38.491864    5737 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:38.493173    5737 out.go:303] Setting JSON to false
	I0524 12:35:38.508330    5737 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3909,"bootTime":1684953029,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:35:38.508380    5737 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:35:38.513331    5737 out.go:177] * [newest-cni-758000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:35:38.520474    5737 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:35:38.520493    5737 notify.go:220] Checking for updates...
	I0524 12:35:38.528437    5737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:35:38.529816    5737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:35:38.532457    5737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:35:38.535470    5737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:35:38.538440    5737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:35:38.541610    5737 config.go:182] Loaded profile config "newest-cni-758000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:38.541827    5737 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:35:38.546489    5737 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:35:38.553365    5737 start.go:295] selected driver: qemu2
	I0524 12:35:38.553371    5737 start.go:870] validating driver "qemu2" against &{Name:newest-cni-758000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-758000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:38.553437    5737 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:35:38.555407    5737 start_flags.go:934] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0524 12:35:38.555432    5737 cni.go:84] Creating CNI manager for ""
	I0524 12:35:38.555439    5737 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 12:35:38.555452    5737 start_flags.go:319] config:
	{Name:newest-cni-758000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-758000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:35:38.555520    5737 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 12:35:38.562410    5737 out.go:177] * Starting control plane node newest-cni-758000 in cluster newest-cni-758000
	I0524 12:35:38.566472    5737 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 12:35:38.566496    5737 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 12:35:38.566507    5737 cache.go:57] Caching tarball of preloaded images
	I0524 12:35:38.566599    5737 preload.go:174] Found /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0524 12:35:38.566605    5737 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 12:35:38.566680    5737 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/newest-cni-758000/config.json ...
	I0524 12:35:38.567049    5737 cache.go:195] Successfully downloaded all kic artifacts
	I0524 12:35:38.567066    5737 start.go:364] acquiring machines lock for newest-cni-758000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:38.567093    5737 start.go:368] acquired machines lock for "newest-cni-758000" in 21.834µs
	I0524 12:35:38.567103    5737 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:38.567107    5737 fix.go:55] fixHost starting: 
	I0524 12:35:38.567219    5737 fix.go:103] recreateIfNeeded on newest-cni-758000: state=Stopped err=<nil>
	W0524 12:35:38.567228    5737 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:38.571337    5737 out.go:177] * Restarting existing qemu2 VM for "newest-cni-758000" ...
	I0524 12:35:38.578509    5737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3f:be:30:f5:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:38.580451    5737 main.go:141] libmachine: STDOUT: 
	I0524 12:35:38.580468    5737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:38.580495    5737 fix.go:57] fixHost completed within 13.3875ms
	I0524 12:35:38.580500    5737 start.go:83] releasing machines lock for "newest-cni-758000", held for 13.403125ms
	W0524 12:35:38.580508    5737 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:38.580559    5737 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:38.580564    5737 start.go:702] Will try again in 5 seconds ...
	I0524 12:35:43.582694    5737 start.go:364] acquiring machines lock for newest-cni-758000: {Name:mkb37a68e9ac84a7c17b3cbe42c3966e2ec65b65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 12:35:43.583102    5737 start.go:368] acquired machines lock for "newest-cni-758000" in 310.167µs
	I0524 12:35:43.583262    5737 start.go:96] Skipping create...Using existing machine configuration
	I0524 12:35:43.583284    5737 fix.go:55] fixHost starting: 
	I0524 12:35:43.584167    5737 fix.go:103] recreateIfNeeded on newest-cni-758000: state=Stopped err=<nil>
	W0524 12:35:43.584192    5737 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 12:35:43.588842    5737 out.go:177] * Restarting existing qemu2 VM for "newest-cni-758000" ...
	I0524 12:35:43.595985    5737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3f:be:30:f5:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/newest-cni-758000/disk.qcow2
	I0524 12:35:43.605603    5737 main.go:141] libmachine: STDOUT: 
	I0524 12:35:43.605677    5737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0524 12:35:43.605787    5737 fix.go:57] fixHost completed within 22.500917ms
	I0524 12:35:43.605804    5737 start.go:83] releasing machines lock for "newest-cni-758000", held for 22.668042ms
	W0524 12:35:43.606214    5737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-758000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-758000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0524 12:35:43.613799    5737 out.go:177] 
	W0524 12:35:43.616927    5737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0524 12:35:43.616951    5737 out.go:239] * 
	* 
	W0524 12:35:43.619433    5737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 12:35:43.626786    5737 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-758000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000: exit status 7 (68.222834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-758000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
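The retry path fails identically: SecondStart goes through fixHost and re-executes the saved socket_vmnet_client wrapper, so a dead daemon breaks "Restarting existing qemu2 VM" exactly as it broke creation. Since the wrapper's invocation is the socket path first and then the command to run (as the qemu command lines in the log show), it can be probed without minikube; a sketch, where the echo payload is arbitrary and only runs if the socket connect succeeds:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo socket-ok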

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-758000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-758000 "sudo crictl images -o json": exit status 89 (42.194042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-758000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-758000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-758000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000: exit status 7 (28.335ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-758000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
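The JSON decode error above is a knock-on effect rather than a separate bug: exit status 89 means minikube refused to ssh into a stopped node, so the test fed the "control plane node must be running" banner to the JSON decoder. On a running cluster the image check amounts to listing repo tags from crictl's JSON output, roughly as follows (a sketch, assuming jq is available on the host):

	out/minikube-darwin-arm64 ssh -p newest-cni-758000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]' | sort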

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-758000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-758000 --alsologtostderr -v=1: exit status 89 (41.321125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-758000"

-- /stdout --
** stderr ** 
	I0524 12:35:43.806855    5750 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:35:43.807236    5750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:43.807240    5750 out.go:309] Setting ErrFile to fd 2...
	I0524 12:35:43.807243    5750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:35:43.807345    5750 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:35:43.807598    5750 out.go:303] Setting JSON to false
	I0524 12:35:43.807699    5750 mustload.go:65] Loading cluster: newest-cni-758000
	I0524 12:35:43.808285    5750 config.go:182] Loaded profile config "newest-cni-758000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:35:43.812754    5750 out.go:177] * The control plane node must be running for this command
	I0524 12:35:43.816905    5750 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-758000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-758000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000: exit status 7 (28.739916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-758000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000: exit status 7 (28.2905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-758000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (138/253)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.2/json-events 7.39
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.27
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.27
19 TestBinaryMirror 0.38
22 TestAddons/Setup 403.63
31 TestAddons/parallel/Headlamp 12.35
41 TestHyperKitDriverInstallOrUpdate 7.71
44 TestErrorSpam/setup 29.77
45 TestErrorSpam/start 0.34
46 TestErrorSpam/status 0.26
47 TestErrorSpam/pause 0.67
48 TestErrorSpam/unpause 0.66
49 TestErrorSpam/stop 3.23
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 47.14
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 38.48
56 TestFunctional/serial/KubeContext 0.03
57 TestFunctional/serial/KubectlGetPods 0.05
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.97
61 TestFunctional/serial/CacheCmd/cache/add_local 1.29
62 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
63 TestFunctional/serial/CacheCmd/cache/list 0.03
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.33
66 TestFunctional/serial/CacheCmd/cache/delete 0.07
67 TestFunctional/serial/MinikubeKubectlCmd 0.46
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
69 TestFunctional/serial/ExtraConfig 40.12
70 TestFunctional/serial/ComponentHealth 0.04
71 TestFunctional/serial/LogsCmd 0.66
72 TestFunctional/serial/LogsFileCmd 0.64
74 TestFunctional/parallel/ConfigCmd 0.2
75 TestFunctional/parallel/DashboardCmd 6.96
76 TestFunctional/parallel/DryRun 0.21
77 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/StatusCmd 0.27
83 TestFunctional/parallel/AddonsCmd 0.18
84 TestFunctional/parallel/PersistentVolumeClaim 23.35
86 TestFunctional/parallel/SSHCmd 0.16
87 TestFunctional/parallel/CpCmd 0.3
89 TestFunctional/parallel/FileSync 0.08
90 TestFunctional/parallel/CertSync 0.45
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
98 TestFunctional/parallel/License 0.25
99 TestFunctional/parallel/Version/short 0.04
100 TestFunctional/parallel/Version/components 0.23
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.09
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
105 TestFunctional/parallel/ImageCommands/ImageBuild 2.44
106 TestFunctional/parallel/ImageCommands/Setup 2.02
107 TestFunctional/parallel/DockerEnv/bash 0.44
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
111 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.06
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.59
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.72
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.51
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.64
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
124 TestFunctional/parallel/ServiceCmd/List 0.11
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.11
128 TestFunctional/parallel/ServiceCmd/URL 0.11
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.07
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
136 TestFunctional/parallel/ProfileCmd/profile_list 0.16
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
138 TestFunctional/parallel/MountCmd/any-port 6.38
139 TestFunctional/parallel/MountCmd/specific-port 0.87
141 TestFunctional/delete_addon-resizer_images 0.16
142 TestFunctional/delete_my-image_image 0.04
143 TestFunctional/delete_minikube_cached_images 0.04
147 TestImageBuild/serial/Setup 31.03
148 TestImageBuild/serial/NormalBuild 1.68
150 TestImageBuild/serial/BuildWithDockerIgnore 0.15
151 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.12
154 TestIngressAddonLegacy/StartLegacyK8sCluster 77.63
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.82
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.23
161 TestJSONOutput/start/Command 83.35
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.3
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.24
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 12.08
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.36
189 TestMainNoArgs 0.03
190 TestMinikubeProfile 61.59
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
251 TestNoKubernetes/serial/ProfileList 0.15
252 TestNoKubernetes/serial/Stop 0.06
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
272 TestStartStop/group/old-k8s-version/serial/Stop 0.06
273 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
277 TestStartStop/group/no-preload/serial/Stop 0.06
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
294 TestStartStop/group/embed-certs/serial/Stop 0.06
295 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
314 TestStartStop/group/newest-cni/serial/Stop 0.07
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-108000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-108000: exit status 85 (94.527ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |          |
	|         | -p download-only-108000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:35:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:35:43.247013    1456 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:35:43.247159    1456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:35:43.247162    1456 out.go:309] Setting ErrFile to fd 2...
	I0524 11:35:43.247165    1456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:35:43.247229    1456 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	W0524 11:35:43.247359    1456 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16573-1024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16573-1024/.minikube/config/config.json: no such file or directory
	I0524 11:35:43.248572    1456 out.go:303] Setting JSON to true
	I0524 11:35:43.265684    1456 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":314,"bootTime":1684953029,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:35:43.265738    1456 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:35:43.269580    1456 out.go:97] [download-only-108000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:35:43.273607    1456 out.go:169] MINIKUBE_LOCATION=16573
	I0524 11:35:43.269733    1456 notify.go:220] Checking for updates...
	W0524 11:35:43.269768    1456 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball: no such file or directory
	I0524 11:35:43.278486    1456 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:35:43.281606    1456 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:35:43.283036    1456 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:35:43.286482    1456 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	W0524 11:35:43.292553    1456 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 11:35:43.292745    1456 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 11:35:43.297497    1456 out.go:97] Using the qemu2 driver based on user configuration
	I0524 11:35:43.297517    1456 start.go:295] selected driver: qemu2
	I0524 11:35:43.297532    1456 start.go:870] validating driver "qemu2" against <nil>
	I0524 11:35:43.297588    1456 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 11:35:43.301532    1456 out.go:169] Automatically selected the socket_vmnet network
	I0524 11:35:43.307076    1456 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0524 11:35:43.307213    1456 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 11:35:43.307234    1456 cni.go:84] Creating CNI manager for ""
	I0524 11:35:43.307257    1456 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 11:35:43.307261    1456 start_flags.go:319] config:
	{Name:download-only-108000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:35:43.307399    1456 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:35:43.311547    1456 out.go:97] Downloading VM boot image ...
	I0524 11:35:43.311585    1456 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/iso/arm64/minikube-v1.30.1-1684536668-16501-arm64.iso
	I0524 11:35:50.541475    1456 out.go:97] Starting control plane node download-only-108000 in cluster download-only-108000
	I0524 11:35:50.541501    1456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 11:35:50.594774    1456 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 11:35:50.594832    1456 cache.go:57] Caching tarball of preloaded images
	I0524 11:35:50.594990    1456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 11:35:50.599399    1456 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0524 11:35:50.599405    1456 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:50.675234    1456 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0524 11:35:57.182614    1456 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:57.182748    1456 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:57.826801    1456 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0524 11:35:57.826976    1456 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/download-only-108000/config.json ...
	I0524 11:35:57.827004    1456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/download-only-108000/config.json: {Name:mkb01c988bf51437b0ec4fd4bf88d2090d77f626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 11:35:57.827258    1456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 11:35:57.827437    1456 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0524 11:35:58.153748    1456 out.go:169] 
	W0524 11:35:58.158931    1456 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16573-1024/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378 0x103be6378] Decompressors:map[bz2:0x1400049a838 gz:0x1400049a890 tar:0x1400049a840 tar.bz2:0x1400049a850 tar.gz:0x1400049a860 tar.xz:0x1400049a870 tar.zst:0x1400049a880 tbz2:0x1400049a850 tgz:0x1400049a860 txz:0x1400049a870 tzst:0x1400049a880 xz:0x1400049a898 zip:0x1400049a8a0 zst:0x1400049a8b0] Getters:map[file:0x14000ab97c0 http:0x14000a22aa0 https:0x14000a22af0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0524 11:35:58.158961    1456 out_reason.go:110] 
	W0524 11:35:58.166752    1456 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 11:35:58.169797    1456 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-108000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
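Note: the kubectl caching failure quoted above is expected for this version/platform combination. The checksum companion file for the darwin/arm64 v1.16.0 kubectl binary returns 404, most likely because upstream Kubernetes never published darwin/arm64 binaries for v1.16.0 (Apple Silicon builds came much later). An illustrative Go check (not part of the test suite; the URL is copied verbatim from the failure log) to confirm the missing checksum file:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL taken verbatim from the out_reason.go failure above.
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // expected: 404 Not Found
	}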

TestDownloadOnly/v1.27.2/json-events (7.39s)

=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-108000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-108000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 : (7.390338708s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (7.39s)

TestDownloadOnly/v1.27.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-108000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-108000: exit status 85 (74.951834ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |          |
	|         | -p download-only-108000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-108000 | jenkins | v1.30.1 | 24 May 23 11:35 PDT |          |
	|         | -p download-only-108000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 11:35:58
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 11:35:58.358144    1478 out.go:296] Setting OutFile to fd 1 ...
	I0524 11:35:58.358259    1478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:35:58.358262    1478 out.go:309] Setting ErrFile to fd 2...
	I0524 11:35:58.358265    1478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 11:35:58.358335    1478 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	W0524 11:35:58.358391    1478 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16573-1024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16573-1024/.minikube/config/config.json: no such file or directory
	I0524 11:35:58.359340    1478 out.go:303] Setting JSON to true
	I0524 11:35:58.374612    1478 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":329,"bootTime":1684953029,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 11:35:58.374668    1478 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 11:35:58.379456    1478 out.go:97] [download-only-108000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 11:35:58.383470    1478 out.go:169] MINIKUBE_LOCATION=16573
	I0524 11:35:58.379585    1478 notify.go:220] Checking for updates...
	I0524 11:35:58.390441    1478 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 11:35:58.393516    1478 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 11:35:58.396395    1478 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 11:35:58.399466    1478 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	W0524 11:35:58.405480    1478 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 11:35:58.405771    1478 config.go:182] Loaded profile config "download-only-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0524 11:35:58.405802    1478 start.go:778] api.Load failed for download-only-108000: filestore "download-only-108000": Docker machine "download-only-108000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0524 11:35:58.405830    1478 driver.go:375] Setting default libvirt URI to qemu:///system
	W0524 11:35:58.405845    1478 start.go:778] api.Load failed for download-only-108000: filestore "download-only-108000": Docker machine "download-only-108000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0524 11:35:58.409387    1478 out.go:97] Using the qemu2 driver based on existing profile
	I0524 11:35:58.409396    1478 start.go:295] selected driver: qemu2
	I0524 11:35:58.409400    1478 start.go:870] validating driver "qemu2" against &{Name:download-only-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:35:58.411368    1478 cni.go:84] Creating CNI manager for ""
	I0524 11:35:58.411384    1478 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 11:35:58.411391    1478 start_flags.go:319] config:
	{Name:download-only-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-108000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 11:35:58.411461    1478 iso.go:125] acquiring lock: {Name:mk78431aa792d64459e9d8bd2fb1ce84dc420421 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 11:35:58.414476    1478 out.go:97] Starting control plane node download-only-108000 in cluster download-only-108000
	I0524 11:35:58.414484    1478 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:35:58.467409    1478 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0524 11:35:58.467435    1478 cache.go:57] Caching tarball of preloaded images
	I0524 11:35:58.467630    1478 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 11:35:58.472972    1478 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0524 11:35:58.472980    1478 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0524 11:35:58.564563    1478 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4?checksum=md5:4271952d77a401a4cbcfc4225771d46f -> /Users/jenkins/minikube-integration/16573-1024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-108000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.27s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.27s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-108000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.27s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-689000 --alsologtostderr --binary-mirror http://127.0.0.1:49309 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-689000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-689000
--- PASS: TestBinaryMirror (0.38s)
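Note: --binary-mirror points minikube's Kubernetes binary downloads at a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of such a mirror, assuming it simply serves a directory of pre-downloaded release artifacts laid out the way minikube requests them (the ./mirror directory and path layout here are hypothetical, not taken from the test):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-fetched binaries, e.g.
		// ./mirror/release/v1.27.2/bin/darwin/arm64/kubectl,
		// on the address the test passed via --binary-mirror.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:49309", nil))
	}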

TestAddons/Setup (403.63s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-514000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-514000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m43.625845916s)
--- PASS: TestAddons/Setup (403.63s)

TestAddons/parallel/Headlamp (12.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-514000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-kf2ww" [490c364a-3e83-4de2-89de-8ca7ba0dde32] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-kf2ww" [490c364a-3e83-4de2-89de-8ca7ba0dde32] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.013280708s
--- PASS: TestAddons/parallel/Headlamp (12.35s)

TestHyperKitDriverInstallOrUpdate (7.71s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.71s)

TestErrorSpam/setup (29.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-335000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-335000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 --driver=qemu2 : (29.769550125s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2."
--- PASS: TestErrorSpam/setup (29.77s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 unpause
--- PASS: TestErrorSpam/unpause (0.66s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 stop: (3.071461959s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-335000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-335000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16573-1024/.minikube/files/etc/test/nested/copy/1454/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.14s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-097000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-arm64 start -p functional-097000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.139692208s)
--- PASS: TestFunctional/serial/StartWithProxy (47.14s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.48s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-097000 --alsologtostderr -v=8
E0524 12:17:50.582492    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:50.590966    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:50.603044    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:50.624171    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:50.666416    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:50.748565    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:50.910698    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:51.231206    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:51.873487    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:53.155900    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:17:55.718384    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:18:00.840580    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
functional_test.go:654: (dbg) Done: out/minikube-darwin-arm64 start -p functional-097000 --alsologtostderr -v=8: (38.48075325s)
functional_test.go:658: soft start took 38.48120575s for "functional-097000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.48s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-097000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 cache add registry.k8s.io/pause:3.1: (2.19065025s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cache add registry.k8s.io/pause:3.3
E0524 12:18:11.082622    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 cache add registry.k8s.io/pause:3.3: (2.067518209s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 cache add registry.k8s.io/pause:latest: (1.713851791s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.97s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4272419104/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cache add minikube-local-cache-test:functional-097000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cache delete minikube-local-cache-test:functional-097000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-097000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (76.395959ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 cache reload: (1.093314459s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)
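Note on the sequence above: the pause image is removed inside the node, the first "crictl inspecti" is therefore expected to fail, and "cache reload" restores the image from the host-side cache, after which the final "inspecti" succeeds. A stand-alone Go sketch of the same sequence (commands copied verbatim from the log; assumes out/minikube-darwin-arm64 and the functional-097000 profile exist):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s\n", args, out)
		return err
	}

	func main() {
		_ = run("-p", "functional-097000", "ssh", "sudo", "docker", "rmi", "registry.k8s.io/pause:latest")
		// Expected to fail: the image is gone from the node.
		_ = run("-p", "functional-097000", "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest")
		_ = run("-p", "functional-097000", "cache", "reload")
		// Should now succeed: the cached image was pushed back into the node.
		_ = run("-p", "functional-097000", "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest")
	}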

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.46s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 kubectl -- --context functional-097000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.46s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-097000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

TestFunctional/serial/ExtraConfig (40.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-097000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0524 12:18:31.564997    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-darwin-arm64 start -p functional-097000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.114977667s)
functional_test.go:756: restart took 40.115119s for "functional-097000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.12s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-097000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd736326345/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/parallel/ConfigCmd (0.2s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 config get cpus: exit status 14 (28.8435ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 config get cpus: exit status 14 (28.226833ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
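Note: both "config get cpus" failures above exit with status 14 because the key is unset, which is exactly what the test asserts after each "config unset cpus". A minimal sketch of that get/set/unset contract (the exit code and error text are taken from the log; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
	)

	// config starts empty, matching the state after "config unset cpus".
	var config = map[string]string{}

	func get(key string) {
		val, ok := config[key]
		if !ok {
			fmt.Fprintln(os.Stderr, "Error: specified key could not be found in config")
			os.Exit(14) // the exit status the test expects
		}
		fmt.Println(val)
	}

	func main() {
		get("cpus") // prints the error above and exits with status 14
	}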

TestFunctional/parallel/DashboardCmd (6.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-097000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-097000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2980: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.96s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-097000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-097000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.857667ms)

-- stdout --
	* [functional-097000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0524 12:19:56.643662    2964 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:19:56.643785    2964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:19:56.643788    2964 out.go:309] Setting ErrFile to fd 2...
	I0524 12:19:56.643791    2964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:19:56.643870    2964 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:19:56.644889    2964 out.go:303] Setting JSON to false
	I0524 12:19:56.662251    2964 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2967,"bootTime":1684953029,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:19:56.662333    2964 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:19:56.670408    2964 out.go:177] * [functional-097000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0524 12:19:56.673442    2964 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:19:56.676443    2964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:19:56.673482    2964 notify.go:220] Checking for updates...
	I0524 12:19:56.683373    2964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:19:56.686444    2964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:19:56.689461    2964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:19:56.692321    2964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:19:56.695598    2964 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:19:56.695802    2964 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:19:56.699418    2964 out.go:177] * Using the qemu2 driver based on existing profile
	I0524 12:19:56.706401    2964 start.go:295] selected driver: qemu2
	I0524 12:19:56.706406    2964 start.go:870] validating driver "qemu2" against &{Name:functional-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-097000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:19:56.706455    2964 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:19:56.712475    2964 out.go:177] 
	W0524 12:19:56.716374    2964 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0524 12:19:56.720408    2964 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-097000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)
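Note: the exit status 23 above comes from up-front validation, not from the VM layer: even with --dry-run, minikube rejects the requested 250MB against the 1800MB usable minimum before doing any driver work. A minimal sketch of such a guard (the 1800MB floor and the reason code are taken from the log; this is not minikube's actual code):

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // floor reported in the log above

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the --dry-run above
		fmt.Println(validateMemory(4000)) // nil: the size the other tests use
	}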

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-097000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-097000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.871084ms)

-- stdout --
	* [functional-097000] minikube v1.30.1 sur Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0524 12:19:56.852058    2974 out.go:296] Setting OutFile to fd 1 ...
	I0524 12:19:56.852180    2974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:19:56.852183    2974 out.go:309] Setting ErrFile to fd 2...
	I0524 12:19:56.852185    2974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 12:19:56.852264    2974 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
	I0524 12:19:56.853623    2974 out.go:303] Setting JSON to false
	I0524 12:19:56.869564    2974 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2967,"bootTime":1684953029,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0524 12:19:56.869663    2974 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 12:19:56.874444    2974 out.go:177] * [functional-097000] minikube v1.30.1 sur Darwin 13.3.1 (arm64)
	I0524 12:19:56.881456    2974 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 12:19:56.881511    2974 notify.go:220] Checking for updates...
	I0524 12:19:56.888388    2974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	I0524 12:19:56.891451    2974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0524 12:19:56.892844    2974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 12:19:56.895427    2974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	I0524 12:19:56.912522    2974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 12:19:56.916712    2974 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 12:19:56.916932    2974 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 12:19:56.918663    2974 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0524 12:19:56.925431    2974 start.go:295] selected driver: qemu2
	I0524 12:19:56.925437    2974 start.go:870] validating driver "qemu2" against &{Name:functional-097000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-097000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0524 12:19:56.925484    2974 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 12:19:56.931357    2974 out.go:177] 
	W0524 12:19:56.935480    2974 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0524 12:19:56.939396    2974 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
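For context: this test re-runs a failing start under a French locale and asserts that the error is localized. The exit message above translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB". A rough sketch of reproducing the same localized failure by hand; the LC_ALL variable and the --dry-run flag are assumptions, while the 250MB request and profile name come from the log:

	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-097000 --dry-run --memory 250MB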

TestFunctional/parallel/StatusCmd (0.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)
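The -f invocation above renders selected fields of the status struct as a single line; a sketch of the expected output for a healthy profile (field values assumed, not captured in this run; the "kublet" label is copied verbatim from the test's format string):

	$ out/minikube-darwin-arm64 -p functional-097000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured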

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (23.35s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ceca1710-28c8-4200-b6aa-9c0cfeb1efc9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013860791s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-097000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-097000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-097000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-097000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1d72548-8066-4873-a8cb-ca022f31f853] Pending
helpers_test.go:344: "sp-pod" [a1d72548-8066-4873-a8cb-ca022f31f853] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1d72548-8066-4873-a8cb-ca022f31f853] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.007827625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-097000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-097000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-097000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [421a517f-dcbc-4041-b947-642b879f9ac7] Pending
helpers_test.go:344: "sp-pod" [421a517f-dcbc-4041-b947-642b879f9ac7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [421a517f-dcbc-4041-b947-642b879f9ac7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007805167s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-097000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.35s)
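The sequence above proves two things: the claim binds, and data written to the mounted volume survives deleting and re-creating the pod. A condensed manual equivalent (the jsonpath expression is an assumption; all names and paths come from the log):

	kubectl --context functional-097000 get pvc myclaim -o jsonpath='{.status.phase}'    # expect: Bound
	kubectl --context functional-097000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-097000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-097000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-097000 exec sp-pod -- ls /tmp/mount                     # foo should still be listed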

TestFunctional/parallel/SSHCmd (0.16s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.16s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh -n functional-097000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 cp functional-097000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd444396276/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh -n functional-097000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.30s)
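The two `ssh -n ... sudo cat` calls are how the test checks the copied file's contents in both directions; a compact equivalent that compares the round-tripped file locally (the /tmp destination is illustrative):

	out/minikube-darwin-arm64 -p functional-097000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-arm64 -p functional-097000 cp functional-097000:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
	diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "round trip OK"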

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/1454/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /etc/test/nested/copy/1454/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/1454.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /etc/ssl/certs/1454.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/1454.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /usr/share/ca-certificates/1454.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/14542.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /etc/ssl/certs/14542.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/14542.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /usr/share/ca-certificates/14542.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)
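The numeric filenames checked above (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash names, i.e. `<hash>.0` derived from the cert's subject; assuming that convention, the expected name for a given cert can be computed locally (file path illustrative):

	openssl x509 -noout -hash -in 1454.pem    # should print 51391683, matching /etc/ssl/certs/51391683.0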

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-097000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "sudo systemctl is-active crio": exit status 1 (69.002542ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
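The non-zero exit is the assertion here: `systemctl is-active` exits 3 for an inactive unit (visible as "Process exited with status 3" in the stderr above), so with docker as the configured runtime the crio unit is expected to report inactive:

	$ sudo systemctl is-active crio; echo "exit=$?"
	inactive
	exit=3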

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-097000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-097000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-097000
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-097000 image ls --format short --alsologtostderr:
I0524 12:20:01.452159    3000 out.go:296] Setting OutFile to fd 1 ...
I0524 12:20:01.452856    3000 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:01.452862    3000 out.go:309] Setting ErrFile to fd 2...
I0524 12:20:01.452865    3000 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:01.452965    3000 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
I0524 12:20:01.453833    3000 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:01.453919    3000 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:01.454768    3000 ssh_runner.go:195] Run: systemctl --version
I0524 12:20:01.454779    3000 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
I0524 12:20:01.490999    3000 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-097000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-097000 | 2cd4fc19c9724 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | 2ee705380c3c5 | 107MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-scheduler              | v1.27.2           | 305d7ed1dae28 | 56.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| gcr.io/google-containers/addon-resizer      | functional-097000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | b005e88565d71 | 135MB  |
| registry.k8s.io/kube-apiserver              | v1.27.2           | 72c9df6be7f1b | 115MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2           | 29921a0845422 | 66.5MB |
| docker.io/library/nginx                     | alpine            | 510900496a6c3 | 40.6MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/localhost/my-image                | functional-097000 | 5a10b8d7a2273 | 1.41MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-097000 image ls --format table --alsologtostderr:
I0524 12:20:04.023478    3014 out.go:296] Setting OutFile to fd 1 ...
I0524 12:20:04.023601    3014 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:04.023604    3014 out.go:309] Setting ErrFile to fd 2...
I0524 12:20:04.023606    3014 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:04.023686    3014 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
I0524 12:20:04.024092    3014 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:04.024147    3014 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:04.024964    3014 ssh_runner.go:195] Run: systemctl --version
I0524 12:20:04.024974    3014 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
I0524 12:20:04.061627    3014 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-097000 image ls --format json --alsologtostderr:
[{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-097000"],"size":"32900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"2cd4fc19c9724dfc5b05c4561e131c37249ea540d4eed93cf35c48183f760e89","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-097000"],"size":"30"},{"id":"b005e88565d715aa96012a41f893ab0c8f8bc0aa688f3d4bb91b503295431622","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"135000000"},{"id":
"2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"107000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"5a10b8d7a2273ec70aa69320aac541644e440cd2ea7fd7b3cd023b1f820579ac","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-097000"],"size":"1410000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"510900496a6c312a512d8f4ba0c69586e0fbd540955d65869b6010174362c313","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40600000"},{"id":"24bc64e911039ecf00e263be2161797c758
b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"115000000"},{"id":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"56200000"},{"id":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"66500000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["regist
ry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-097000 image ls --format json --alsologtostderr:
I0524 12:20:03.940576    3010 out.go:296] Setting OutFile to fd 1 ...
I0524 12:20:03.940725    3010 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:03.940730    3010 out.go:309] Setting ErrFile to fd 2...
I0524 12:20:03.940733    3010 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:03.940806    3010 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
I0524 12:20:03.941182    3010 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:03.941239    3010 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:03.942049    3010 ssh_runner.go:195] Run: systemctl --version
I0524 12:20:03.942060    3010 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
I0524 12:20:03.978341    3010 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.09s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-097000 image ls --format yaml --alsologtostderr:
- id: 2cd4fc19c9724dfc5b05c4561e131c37249ea540d4eed93cf35c48183f760e89
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-097000
size: "30"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-097000
size: "32900000"
- id: b005e88565d715aa96012a41f893ab0c8f8bc0aa688f3d4bb91b503295431622
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "135000000"
- id: 72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "115000000"
- id: 2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "107000000"
- id: 29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "66500000"
- id: 510900496a6c312a512d8f4ba0c69586e0fbd540955d65869b6010174362c313
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40600000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "56200000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-097000 image ls --format yaml --alsologtostderr:
I0524 12:20:01.544043    3002 out.go:296] Setting OutFile to fd 1 ...
I0524 12:20:01.544201    3002 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:01.544204    3002 out.go:309] Setting ErrFile to fd 2...
I0524 12:20:01.544207    3002 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:01.544277    3002 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
I0524 12:20:01.544679    3002 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:01.544735    3002 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:01.545598    3002 ssh_runner.go:195] Run: systemctl --version
I0524 12:20:01.545608    3002 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
I0524 12:20:01.581474    3002 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh pgrep buildkitd: exit status 1 (77.803583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image build -t localhost/my-image:functional-097000 testdata/build --alsologtostderr
2023/05/24 12:20:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 image build -t localhost/my-image:functional-097000 testdata/build --alsologtostderr: (2.28095475s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-097000 image build -t localhost/my-image:functional-097000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 3e0930f3be68
Removing intermediate container 3e0930f3be68
---> 9aae956b1b73
Step 3/3 : ADD content.txt /
---> 5a10b8d7a227
Successfully built 5a10b8d7a227
Successfully tagged localhost/my-image:functional-097000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-097000 image build -t localhost/my-image:functional-097000 testdata/build --alsologtostderr:
I0524 12:20:01.713811    3006 out.go:296] Setting OutFile to fd 1 ...
I0524 12:20:01.714038    3006 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:01.714040    3006 out.go:309] Setting ErrFile to fd 2...
I0524 12:20:01.714043    3006 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 12:20:01.714126    3006 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16573-1024/.minikube/bin
I0524 12:20:01.714528    3006 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:01.715273    3006 config.go:182] Loaded profile config "functional-097000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 12:20:01.716111    3006 ssh_runner.go:195] Run: systemctl --version
I0524 12:20:01.716122    3006 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/id_rsa Username:docker}
I0524 12:20:01.752082    3006 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.175536519.tar
I0524 12:20:01.752145    3006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0524 12:20:01.755621    3006 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.175536519.tar
I0524 12:20:01.757046    3006 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.175536519.tar: stat -c "%s %y" /var/lib/minikube/build/build.175536519.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.175536519.tar': No such file or directory
I0524 12:20:01.757066    3006 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.175536519.tar --> /var/lib/minikube/build/build.175536519.tar (3072 bytes)
I0524 12:20:01.771365    3006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.175536519
I0524 12:20:01.774374    3006 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.175536519 -xf /var/lib/minikube/build/build.175536519.tar
I0524 12:20:01.777400    3006 docker.go:336] Building image: /var/lib/minikube/build/build.175536519
I0524 12:20:01.777448    3006 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-097000 /var/lib/minikube/build/build.175536519
I0524 12:20:03.952858    3006 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-097000 /var/lib/minikube/build/build.175536519: (2.175405167s)
I0524 12:20:03.952917    3006 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.175536519
I0524 12:20:03.956042    3006 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.175536519.tar
I0524 12:20:03.959129    3006 build_images.go:207] Built localhost/my-image:functional-097000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.175536519.tar
I0524 12:20:03.959143    3006 build_images.go:123] succeeded building to: functional-097000
I0524 12:20:03.959146    3006 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.44s)
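The step output implies that testdata/build contains a three-step Dockerfile along these lines (a reconstruction from the Step 1/3 through 3/3 lines above, not the file itself):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /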

TestFunctional/parallel/ImageCommands/Setup (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.966684375s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-097000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.02s)

TestFunctional/parallel/DockerEnv/bash (0.44s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-097000 docker-env) && out/minikube-darwin-arm64 status -p functional-097000"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-097000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.44s)
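docker-env works by emitting shell exports that repoint the host's docker CLI at the daemon inside the VM, which the eval wrappers above consume. The output has roughly this shape (a sketch; variable values are assumptions except the IP, which appears throughout this log):

	$ out/minikube-darwin-arm64 -p functional-097000 docker-env
	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://192.168.105.4:2376"
	export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/16573-1024/.minikube/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-097000"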

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-097000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-097000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-qssp9" [48948cbf-243f-4fde-be59-4200f9512da0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-qssp9" [48948cbf-243f-4fde-be59-4200f9512da0] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.010372459s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)
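expose --type=NodePort makes the deployment reachable on a high port on the node; the HTTPS/Format/URL subtests below discover it as 30458. A direct way to read the assigned port back (the jsonpath expression is an assumption):

	kubectl --context functional-097000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'    # 30458 in this run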

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image load --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 image load --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr: (1.981063959s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image load --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 image load --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr: (1.501484125s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.833874458s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-097000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image load --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 image load --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr: (1.76083125s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.72s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image save gcr.io/google-containers/addon-resizer:functional-097000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image rm gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-097000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 image save --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr
E0524 12:19:12.526889    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
functional_test.go:422: (dbg) Done: out/minikube-darwin-arm64 -p functional-097000 image save --daemon gcr.io/google-containers/addon-resizer:functional-097000 --alsologtostderr: (1.545587917s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-097000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.64s)
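Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full save/remove/load/save-back cycle; condensed into the commands they run (all taken from the log above):

	out/minikube-darwin-arm64 -p functional-097000 image save gcr.io/google-containers/addon-resizer:functional-097000 /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-arm64 -p functional-097000 image rm gcr.io/google-containers/addon-resizer:functional-097000
	out/minikube-darwin-arm64 -p functional-097000 image load /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-arm64 -p functional-097000 image save --daemon gcr.io/google-containers/addon-resizer:functional-097000
	docker image inspect gcr.io/google-containers/addon-resizer:functional-097000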

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2789: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-097000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ad76a10d-b4cc-4143-9fd1-2406e3577c8c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ad76a10d-b4cc-4143-9fd1-2406e3577c8c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.012704167s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

TestFunctional/parallel/ServiceCmd/List (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 service list -o json
functional_test.go:1492: Took "102.049791ms" to run "out/minikube-darwin-arm64 -p functional-097000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.105.4:30458
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.105.4:30458
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-097000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.164.49 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
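
The serial tunnel flow above can be replayed by hand with the same commands the tests drive; a minimal sketch using the service name and addresses observed in this run (they will differ elsewhere, and the curl/kill steps are illustrative additions, not test commands):

	$ out/minikube-darwin-arm64 -p functional-097000 tunnel --alsologtostderr &
	$ kubectl --context functional-097000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	$ curl http://10.103.164.49    # illustrative; AccessDirect probes this URL
	$ dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	$ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
	$ kill %1                      # illustrative stand-in for what DeleteTunnel does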

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.16s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1313: Took "130.352958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1327: Took "29.816458ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.16s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1364: Took "123.794875ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1377: Took "33.725666ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)
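
The two timings above are the point of --light: the full listing (~124ms here) validates each profile's cluster status, while --light (~34ms) skips that probe. A minimal sketch of consuming the JSON form, assuming the usual valid/invalid output schema (not shown in this log):

	$ out/minikube-darwin-arm64 profile list -o json --light | jq -r '.valid[].Name'   # .valid[].Name is an assumed schema path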

TestFunctional/parallel/MountCmd/any-port (6.38s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port52071553/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1684955978245420000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port52071553/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1684955978245420000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port52071553/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1684955978245420000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port52071553/001/test-1684955978245420000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.320167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 24 19:19 created-by-test
-rw-r--r-- 1 docker docker 24 May 24 19:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 24 19:19 test-1684955978245420000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh cat /mount-9p/test-1684955978245420000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-097000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1422403e-f132-472f-95f2-6c52d50c03a1] Pending
helpers_test.go:344: "busybox-mount" [1422403e-f132-472f-95f2-6c52d50c03a1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1422403e-f132-472f-95f2-6c52d50c03a1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1422403e-f132-472f-95f2-6c52d50c03a1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008629417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-097000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port52071553/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.38s)
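
The any-port sequence above reduces to: mount a host directory into the guest over 9p, confirm it with findmnt (the first probe may fail while the mount settles, as it did here), exercise it from a pod, then unmount. A minimal sketch with a hypothetical host directory in place of the test's temp dir:

	$ out/minikube-darwin-arm64 mount -p functional-097000 /tmp/demo:/mount-9p --alsologtostderr -v=1 &   # /tmp/demo is hypothetical
	$ out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-darwin-arm64 -p functional-097000 ssh -- ls -la /mount-9p
	$ out/minikube-darwin-arm64 -p functional-097000 ssh "sudo umount -f /mount-9p"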

TestFunctional/parallel/MountCmd/specific-port (0.87s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4083368905/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (58.259584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16573-1024/.minikube/machines/functional-097000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_053e3763d499810ba176936b1814cd3a5443d1cd_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4083368905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "sudo umount -f /mount-9p": exit status 1 (68.355292ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-097000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4083368905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.87s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-097000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-097000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-097000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (31.03s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-594000 --driver=qemu2 
E0524 12:20:34.448397    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-594000 --driver=qemu2 : (31.034191208s)
--- PASS: TestImageBuild/serial/Setup (31.03s)

TestImageBuild/serial/NormalBuild (1.68s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-594000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-594000: (1.675540208s)
--- PASS: TestImageBuild/serial/NormalBuild (1.68s)

TestImageBuild/serial/BuildWithDockerIgnore (0.15s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-594000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.15s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.12s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-594000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.12s)

TestIngressAddonLegacy/StartLegacyK8sCluster (77.63s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-607000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-607000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m17.627135667s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (77.63s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.82s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons enable ingress --alsologtostderr -v=5: (13.816887875s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.82s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-607000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

TestJSONOutput/start/Command (83.35s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-070000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0524 12:23:18.283500    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/addons-514000/client.crt: no such file or directory
E0524 12:24:01.550303    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:01.556694    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:01.568828    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:01.590951    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:01.633037    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:01.715108    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:01.877198    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:02.199330    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:02.841839    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:04.124274    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:06.686681    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:11.808808    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
E0524 12:24:22.051134    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-070000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m23.348514708s)
--- PASS: TestJSONOutput/start/Command (83.35s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.3s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-070000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.30s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.24s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-070000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.24s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-070000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-070000 --output=json --user=testUser: (12.078488709s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.36s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-239000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-239000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.839834ms)
-- stdout --
	{"specversion":"1.0","id":"eece9ec5-363e-42aa-b57f-0edf76aa88b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-239000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"50e058b0-3622-44d1-9c6b-f35d45c491b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16573"}}
	{"specversion":"1.0","id":"77db2691-aee5-43f0-be95-ada89f010439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig"}}
	{"specversion":"1.0","id":"670c9f24-59bd-4f30-a9d9-0463164a5778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"16a04f22-b3c3-4786-ab95-7690e2faac58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e7eec7bd-f76e-438d-984c-38e3ea8a074e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube"}}
	{"specversion":"1.0","id":"25d3cf4a-d012-40ec-8df4-7800f4575d77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"287d7d17-9950-47fe-a8f7-16151cde8965","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-239000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-239000
--- PASS: TestErrorJSONOutput (0.36s)
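
Every line that start --output=json emits is a CloudEvents-style JSON envelope, so the DRV_UNSUPPORTED_OS failure above arrives as an io.k8s.sigs.minikube.error event rather than as plain text. A minimal sketch of pulling such errors out of the stream (the jq filter is illustrative, not part of the test):

	$ out/minikube-darwin-arm64 start -p json-output-error-239000 --memory=2200 --output=json --wait=true --driver=fail \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	The driver 'fail' is not supported on darwin/arm64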

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.59s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-127000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-127000 --driver=qemu2 : (29.226797791s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-128000 --driver=qemu2 
E0524 12:25:23.494782    1454 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16573-1024/.minikube/profiles/functional-097000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-128000 --driver=qemu2 : (31.588038959s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-127000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-128000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-128000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-128000
helpers_test.go:175: Cleaning up "first-127000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-127000
--- PASS: TestMinikubeProfile (61.59s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (94.909167ms)
-- stdout --
	* [NoKubernetes-218000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16573
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16573-1024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16573-1024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
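
The MK_USAGE exit above is the guard for mutually exclusive flags: --no-kubernetes cannot be combined with --kubernetes-version, and the suggested fix is to clear any global version setting before retrying. A minimal sketch following the CLI's own advice:

	$ out/minikube-darwin-arm64 config unset kubernetes-version
	$ out/minikube-darwin-arm64 start -p NoKubernetes-218000 --no-kubernetes --driver=qemu2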

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-218000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-218000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.1775ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-218000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-218000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-218000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-218000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.177875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-218000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-787000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000: exit status 7 (28.855458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-787000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
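
Note how status reports through its exit code as well as stdout: exit status 7 alongside a Stopped host, which the helper explicitly tolerates after a stop ("may be ok"). A minimal sketch of checking both by hand (reading exit code 7 as "stopped" is inferred from this run, not from documentation):

	$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-787000 -n old-k8s-version-787000
	Stopped
	$ echo $?
	7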

TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-601000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-601000 -n no-preload-601000: exit status 7 (27.525666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-601000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-989000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-989000 -n embed-certs-989000: exit status 7 (26.968791ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-989000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-324000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-324000 -n default-k8s-diff-port-324000: exit status 7 (28.25575ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-324000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-758000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-758000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-758000 -n newest-cni-758000: exit status 7 (28.517542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-758000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/253)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (15.49s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1: exit status 1 (79.46725ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (66.329667ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (66.323791ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (65.793167ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (67.113208ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (66.245125ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (66.612334ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-097000 ssh "findmnt -T" /mount3: exit status 1 (70.49225ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-097000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4045261492/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.49s)
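[Editor's note] The skip at functional_test_mount_test.go:340 above is the end of a polling loop: the test repeatedly runs findmnt -T for each mount point over minikube ssh and gives up when the mounts never appear, because on macOS an unsigned binary never gets permission to listen on a non-localhost port, so the host-side mount server cannot serve the guest. A minimal Go sketch of that polling pattern, assuming a plain minikube binary on PATH; the 15-second budget and helper names are illustrative, not minikube's actual test code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mountVisible reports whether findmnt can resolve target inside the guest.
// A non-zero exit from `minikube ssh` matches the "exit status 1" lines above.
func mountVisible(profile, target string) bool {
	return exec.Command("minikube", "-p", profile, "ssh", "--", "findmnt", "-T", target).Run() == nil
}

func main() {
	const profile = "functional-097000" // profile name taken from the log
	targets := []string{"/mount1", "/mount2", "/mount3"}

	deadline := time.Now().Add(15 * time.Second) // illustrative retry budget
	for time.Now().Before(deadline) {
		visible := 0
		for _, t := range targets {
			if mountVisible(profile, t) {
				visible++
			}
		}
		if visible == len(targets) {
			fmt.Println("all mounts visible")
			return
		}
		time.Sleep(2 * time.Second)
	}
	// Mirrors the test's t.Skip: give up rather than fail, since the likely
	// cause is the macOS prompt for non-code-signed binaries.
	fmt.Println("mounts never appeared; skipping")
}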

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522:
----------------------- debugLogs start: cilium-220000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-220000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-220000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/hosts:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/resolv.conf:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-220000
>>> host: crictl pods:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: crictl containers:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> k8s: describe netcat deployment:
error: context "cilium-220000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-220000" does not exist
>>> k8s: netcat logs:
error: context "cilium-220000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-220000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-220000" does not exist
>>> k8s: coredns logs:
error: context "cilium-220000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-220000" does not exist
>>> k8s: api server logs:
error: context "cilium-220000" does not exist
>>> host: /etc/cni:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: ip a s:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: ip r s:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: iptables-save:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: iptables table nat:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-220000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-220000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-220000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-220000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-220000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-220000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-220000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-220000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-220000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-220000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-220000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: kubelet daemon config:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> k8s: kubelet logs:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-220000
>>> host: docker daemon status:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: docker daemon config:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: docker system info:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: cri-docker daemon status:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: cri-docker daemon config:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: cri-dockerd version:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: containerd daemon status:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: containerd daemon config:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: containerd config dump:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: crio daemon status:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: crio daemon config:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: /etc/crio:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
>>> host: crio config:
* Profile "cilium-220000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-220000"
----------------------- debugLogs end: cilium-220000 [took: 2.105143917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-220000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-220000
--- SKIP: TestNetworkPlugins/group/cilium (2.37s)
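[Editor's note] Although the test itself is skipped, the harness still runs the debugLogs collection shown above: a fixed battery of labelled kubectl and minikube probes against the profile, each command's output dumped under a ">>>" header. Because the cilium-220000 cluster was never created, every probe reports a missing context or profile. A rough Go sketch of that probe pattern, assuming kubectl and minikube binaries on PATH; the probe list is an illustrative subset, not minikube's actual helper code:

package main

import (
	"fmt"
	"os/exec"
)

// probe pairs a debugLogs-style header with the command that produces its output.
type probe struct {
	label string
	args  []string
}

func main() {
	const name = "cilium-220000" // profile/context name taken from the log

	probes := []probe{
		{">>> k8s: describe coredns deployment",
			[]string{"kubectl", "--context", name, "-n", "kube-system", "describe", "deploy", "coredns"}},
		{">>> k8s: kubectl config",
			[]string{"kubectl", "config", "view"}},
		{">>> host: /etc/resolv.conf",
			[]string{"minikube", "-p", name, "ssh", "cat /etc/resolv.conf"}},
	}

	for _, p := range probes {
		fmt.Println(p.label + ":")
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// With no such context/profile, every probe fails exactly as in the
			// log ("context was not found", "Profile ... not found").
			fmt.Println(err)
		}
		fmt.Println()
	}
}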

TestStartStop/group/disable-driver-mounts (0.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-816000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-816000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)